NVIDIA HAS JOINED FORCES with New York University's Center for Data Science (CDS) to develop next-generation 'deep learning' applications and algorithms for large-scale GPU-accelerated systems.
Nvidia is really pushing the notion of GPU-accelerated deep learning, which it says will help developers bring ever-smarter artificial intelligence (AI) to everything from online image databases to household items and vehicles.
Nvidia is now working with Yann LeCun, who heads the CDS and directs AI research at Facebook, to push GPU-based deep learning even further forward.
"Tomorrow's advances in deep learning rely on new, more sophisticated algorithms. They're designed to help computers achieve - even surpass - human capabilities," explained Nvidia in a blog post.
"They also require the latest, most advanced computing technologies. This is where GPU technology comes in: GPUs are the go-to technology for deep learning, reducing the time it takes to train neural networks by days, even months."
Nvidia said that deep learning researchers are currently working on systems with only one GPU, which limits the number of training parameters and the size of the models they can develop.
However, distributing the deep learning training process among several GPUs allows the researchers to increase the size of the models that can be trained, and the number of models that can be tested, resulting in more accurate models and new classes of applications.
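The multi-GPU approach the article describes is typically data parallelism: each GPU holds a replica of the model, computes gradients on its own slice of the batch, and the gradients are averaged before every replica applies the same weight update. As a minimal, CPU-only sketch of that idea (plain Python, a hypothetical one-parameter linear model; real systems would do this with GPU all-reduce operations):

```python
# Data-parallel training sketch: each "worker" stands in for one GPU.
# Model: a single weight w fitted to y = 3*x with squared loss (w*x - y)^2.

def local_gradient(w, shard):
    """Gradient of the squared loss over one worker's shard of the batch."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train(shards, w=0.0, lr=0.01, steps=50):
    for _ in range(steps):
        # Each worker computes a gradient on its own shard "in parallel"...
        grads = [local_gradient(w, shard) for shard in shards]
        # ...then gradients are averaged (an all-reduce on real hardware)
        # and every replica applies the same update, keeping weights in sync.
        w -= lr * sum(grads) / len(grads)
    return w

data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[0:4], data[4:8]]   # two "GPUs", half the batch each
print(round(train(shards), 3))    # converges towards the true weight 3.0
```

Splitting the batch across more workers leaves the averaged gradient (and thus the result) unchanged, which is why adding GPUs lets researchers scale batch and model size rather than alter the algorithm.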
Recognising this, the CDS recently installed a new deep learning system called ScaLeNet, an eight-node Cirrascale cluster with Nvidia Tesla K80 dual-GPU accelerators.
"The new high-performance system will let researchers take on bigger challenges, and create deep learning models that let computers do human-like perceptual tasks," Nvidia said, adding that multi-GPU machines are a necessary tool for future progress in AI and deep learning.
Potential applications include self-driving cars, medical image analysis systems, real-time speech-to-speech translation, and systems that can truly understand natural language and have conversations with humans.
ScaLeNet will also be used for research projects and educational programmes at the CDS by a large community of faculty members, research scientists, post-doctoral fellows and graduate students.
One of Nvidia's deep learning presentations at its GTC conference in March this year centred on the CUDA Deep Neural Network library (cuDNN), a GPU-accelerated library of primitives for deep neural networks.
Nvidia said that cuDNN is designed for performance, ease of use and low memory overhead, and will be integrated into higher-level machine learning frameworks, such as the popular Caffe, Theano and Torch.
The simple, drop-in design allows developers to focus on designing and implementing neural net models rather than tuning for performance, while still achieving the high performance delivered by modern parallel computing hardware.
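The "primitives" cuDNN provides are routines such as convolution, pooling and activation functions, which frameworks like Caffe, Theano and Torch call instead of writing their own GPU kernels. As a rough illustration of the kind of operation being accelerated (a naive pure-Python version, not cuDNN's actual API), here is a valid-mode 2D convolution of the sort used in convolutional neural network layers:

```python
# Naive "convolution primitive" in pure Python, illustrating the kind of
# routine cuDNN implements as a highly tuned GPU kernel. Computes a valid
# (no-padding) 2D cross-correlation of an image with a small filter.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1      # output height
    ow = len(image[0]) - kw + 1   # output width
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # Dot product of the filter with the image patch at (i, j).
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge = [[1, -1]]                      # horizontal difference filter
print(conv2d(image, edge))            # → [[-1, -1], [-1, -1], [-1, -1]]
```

These nested loops are exactly the workload that maps well onto a GPU's parallel cores, since every output element can be computed independently; the speedups cited in the article come from libraries like cuDNN doing this in parallel rather than serially.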