NVIDIA HAS ANNOUNCED an end-to-end hyperscale data centre platform that it says will let web services companies accelerate their machine learning workloads and power advanced artificial intelligence (AI) applications.
Nvidia's hyperscale line consists of two accelerators. One aims to let researchers design new deep neural networks more quickly for the increasing number of applications they want to power with AI. The other is a low-power accelerator designed to deploy these networks across the data centre. The line also includes a suite of GPU-accelerated libraries, Nvidia said.
The Nvidia hyperscale accelerator line was created to accelerate machine learning workloads while increasing the throughput of data centres. It includes new additions to the Tesla platform, including: the Tesla M40 GPU, which Nvidia claims is "the most powerful accelerator designed for training deep neural networks"; the Tesla M4 GPU, a low-power, small form-factor accelerator for machine learning inference; and Hyperscale Suite software, which is designed for machine learning and video processing.
The Nvidia Tesla M40 GPU accelerator allows data scientists to save time while training their deep neural networks on massive amounts of data for higher overall accuracy. It is said to reduce training time by a factor of eight compared with CPUs, has been designed and tested for high reliability in data centre environments, and supports the firm's GPUDirect software for fast multi-node neural network training.
The Tesla M4 accelerator is a low-power GPU built for hyperscale environments and optimised for demanding, high-growth web services applications such as video transcoding and image and video processing. It offers higher throughput, transcoding and analysing up to five times more simultaneous video streams than CPUs, and has a user-selectable power profile. Nvidia said the Tesla M4 consumes 50-75 watts of power and delivers up to 10 times better energy efficiency than a CPU for video processing and machine learning algorithms.
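The quoted figures allow a quick sanity check of that efficiency claim. The sketch below is illustrative arithmetic only: the 100 W CPU baseline is an assumption for the sake of the example, not a figure from the article.

```python
# Illustrative perf-per-watt comparison using the article's figures.
cpu_streams = 1
cpu_watts = 100                 # assumed CPU power draw (not from the article)
m4_streams = 5 * cpu_streams    # "up to five times more simultaneous streams"
m4_watts = 50                   # low end of the quoted 50-75W envelope

cpu_eff = cpu_streams / cpu_watts   # streams per watt on the CPU
m4_eff = m4_streams / m4_watts      # streams per watt on the Tesla M4

print(round(m4_eff / cpu_eff))  # -> 10
```

Under those assumptions the ratio works out to roughly 10x, which is consistent with Nvidia's "up to 10 times better energy efficiency" claim.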
The Hyperscale Suite software, designed for web services deployments, includes tools for both developers and data centre managers: the firm's cuDNN library, GPU-accelerated FFmpeg multimedia software, the GPU REST Engine and the Image Compute Engine.
Together, these accelerators are said to enable developers to use Nvidia's Tesla Accelerated Computing Platform to drive machine learning in hyperscale data centres and thus create unprecedented AI-based applications. Nvidia thinks this is important because of the recent "flood of web applications" racing to incorporate AI capabilities, which are being made possible by advances in machine learning.
"The artificial intelligence race is on," said Nvidia co-founder and CEO Jen-Hsun Huang. "Machine learning is unquestionably one of the most important developments in computing today, on the scale of the PC, the internet and cloud computing. Industries ranging from consumer cloud services, automotive and health care are being revolutionised as we speak.
"Machine learning is the grand computational challenge of our generation. We created the Tesla hyperscale accelerator line to give machine learning a 10X boost. The time and cost savings to data centres will be significant."
The Tesla M40 GPU accelerator and Hyperscale Suite software will be available later this year, while the Tesla M4 GPU will be available in the first quarter of 2016. µ