NVIDIA HAS ANNOUNCED a series of updates to its GPU-accelerated deep learning software that it claims will double deep learning training performance.
The software updates arrive as Nvidia's Digits Deep Learning GPU Training System version 2 (Digits 2) and the CUDA Deep Neural Network library version 3 (cuDNN 3).
The company said that these will allow data scientists and researchers to boost deep learning projects and product development by "creating more accurate neural networks through faster model training and more sophisticated model design".
Digits 2 is aimed at data scientists and introduces automatic scaling of neural network training across multiple high-performance GPUs. Nvidia claimed that this can double the speed of deep neural network training for image classification compared with a single GPU.
The company described it as the first all-in-one graphical system that guides users through the process of designing, training and validating deep neural networks for image classification.
"The new automatic multi-GPU scaling capability in Digits 2 maximises the available GPU resources by automatically distributing the deep learning training workload across all of the GPUs in the system," Nvidia said.
"Using Digits 2, [our] engineers trained the well-known AlexNet neural network model more than two times faster on four Nvidia Maxwell architecture-based GPUs compared to a single GPU."
CuDNN 3 is aimed at deep learning researchers and features optimised data storage in GPU memory for the training of larger, more sophisticated neural networks.
Nvidia said that the cuDNN 3 update also provides higher performance than cuDNN 2, enabling researchers to train neural networks up to two times faster on a single GPU.
"The new cuDNN 3 library is expected to be integrated into forthcoming versions of the deep learning frameworks Caffe, Minerva, Theano and Torch, which are widely used to train deep neural networks," explained the firm.
"It adds support for 16-bit floating point data storage in GPU memory, doubling the amount of data that can be stored and optimising memory bandwidth. With this capability, cuDNN 3 enables researchers to train larger and more sophisticated neural networks."
"High-performance GPUs are the foundational technology powering deep learning research and product development at universities and major web service companies," said Nvidia's VP of accelerated computing, Ian Buck.
"We're working closely with data scientists, framework developers and the deep learning community to apply the most powerful GPU technologies and push the bounds of what's possible."
The Digits 2 Preview release is available now as a free download for registered developers, and the cuDNN 3 library is expected to be available in major deep learning frameworks "in the coming months".
Nvidia still focuses heavily on GPU technology and software features for powerful dedicated graphics cards aimed at gaming enthusiasts, but this announcement is another example of how the company is, more than ever, widening its horizons.
Nvidia's GPU Technology Conference in March showed how the company is branching out from its core gaming market into the potentially society-changing areas of artificial intelligence and deep learning.
Nvidia used the event to push the notion of GPU-accelerated deep learning, which it says will help developers bring increasingly smart AI to everything from online image databases to household items and vehicles. µ