SAN JOSE: NVIDIA HAS ANNOUNCED Digits DevBox, a Linux-powered mini supercomputer, at its annual GPU Technology Conference (GTC) in California today.
Touted as "the world's fastest desk-side deep learning machine", the Nvidia Digits DevBox was unveiled by the firm's CEO and co-founder, Jen-Hsun Huang, who said the device is powered by four Titan X GPUs, the firm's new graphics card, which was also unveiled at the show today.
Huang said the DevBox has been designed for speeding up deep learning research and costs a whopping $15,000.
"We hope to sell a lot so researchers can easily and quickly get up to speed on deep learning to do real meaningful work," Huang explained.
However, he added that it won't be a mass-produced device, but built to order, meaning developers looking to buy one will have to apply online and, once the system is delivered, will get a direct contact at Nvidia to help them "advance their research".
The Digits DevBox was built by the Nvidia deep learning engineering team for its own R&D work, and every component of the box - from memory to I/O to power - has been optimised to deliver highly efficient performance for deep learning research.
It includes up to four dual-slot GPUs, up to 64GB of DDR4 memory, an Asus X99 motherboard with eight PCIe slots, two 48-port Gen3 PCIe switches alongside the CPU's own PCIe lanes, and up to 3x 3TB of storage in RAID 5, with M.2 SATA or SSD options.
It also comes pre-installed with all the software data scientists and researchers require to develop their own deep neural networks, Huang said.
Very early results of multi-GPU training show the Digits DevBox delivers almost four times higher performance than a single Titan X on key deep learning benchmarks, he added.
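Multi-GPU training of this kind typically relies on data parallelism: each GPU processes a shard of the batch, and the resulting gradients are averaged before a single weight update is applied. Below is a minimal NumPy sketch of that idea using a simple linear model and squared loss - the function name and model are illustrative assumptions, not Nvidia's implementation:

```python
import numpy as np

def sgd_step_data_parallel(w, X, y, lr=0.1, n_devices=4):
    # Split the batch into per-device shards (data parallelism).
    shards = zip(np.array_split(X, n_devices), np.array_split(y, n_devices))
    grads = []
    for Xs, ys in shards:
        # Linear model, squared loss: per-shard gradient is 2/m * X^T (Xw - y).
        err = Xs @ w - ys
        grads.append(2.0 * Xs.T @ err / len(ys))
    # "All-reduce": average the shard gradients, then apply one SGD update.
    w = w - lr * np.mean(grads, axis=0)
    return w
```

With equal shard sizes, averaging the per-shard mean gradients reproduces the full-batch gradient exactly, so the result matches single-device training while the gradient computation is spread across devices.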
"Training AlexNet can be completed in only 13 hours with the Digits DevBox, compared to over two days with the best single-GPU PC, or over a month with a CPU-only system," Huang said.
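Taking the quoted figures at face value, the implied speedups can be worked out with back-of-the-envelope arithmetic, using "two days" and "a month" as lower bounds:

```python
# Rough speedup arithmetic from the quoted AlexNet training times:
# 13 hours on the DevBox vs. "over two days" on the best single-GPU PC
# and "over a month" on a CPU-only system (both taken as lower bounds).
devbox_h = 13
single_gpu_h = 2 * 24   # "over two days"
cpu_only_h = 30 * 24    # "over a month"

print(round(single_gpu_h / devbox_h, 1))  # ~3.7x vs a single GPU
print(round(cpu_only_h / devbox_h, 1))    # ~55.4x vs CPU-only
```

The roughly 3.7x figure against a single GPU is consistent with the "almost four times higher performance" claim for the four-GPU box.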
The Digits DevBox will be available in May, arriving in a package that fits under a desk and plugs into an ordinary wall socket.
Also unveiled at GTC 2015, the Titan X is said to be the most advanced GPU ever created. The notable thing about the Titan X is that it costs just $999 and packs 12GB of frame buffer memory - the same as its predecessor, the Titan Z, which was announced at last year's conference and costs three times as much at $3,000.
The Titan X GPU is based on Nvidia's Maxwell architecture, and boasts eight billion transistors, 3,072 CUDA cores and seven teraflops of single-precision performance (alongside 0.2 teraflops of double-precision), making it the GPU with the highest single-precision throughput the firm has ever created.