INTEL HAS TAKEN the covers off its first dedicated chips for powering artificial intelligence (AI) workloads in the cloud.
The brace of chips arrives as part of the chipmaker's Nervana line-up, which has been focussed on powering AI systems for a while now. But the new NNP-T1000 and NNP-I1000 are Intel's first ASIC-based chips designed for keeping cloud-based AIs ticking along.
Having dedicated chips in cloud systems and data centres allows AI functions to be powered more effectively, rather than simply relying on general-purpose CPUs to handle all the neural network processing AIs need.
With these new Nervana Neural Network Processors (NNPs), Intel has set each chip up for different uses. The NNP-T1000 has been designed to strike "the right balance between computing, communication and memory", meaning it can work with small clusters of servers and scale up to work on "the largest pod of supercomputers".
The NNP-I1000 has been designed for less powerful systems and is targeted at running machine learning and AI models in their inference state - basically putting trained smart systems into action - in real-world use cases and in small-form-factor devices and systems.
After sifting through Intel's jargon and promo speak, it looks like the NNP-T1000 will be the chip to handle the heavy-lifting of training AIs, while the NNP-I1000 seems to be the chip to run smart things, via the cloud, as they are put into action.
Everyone's favourite social network and questionable data wrangler Facebook is already on board the NNP train.
"We are excited to be working with Intel to deploy faster and more efficient inference compute with the Intel Nervana Neural Network Processor for inference and to extend support for our state-of-the-art deep learning compiler, Glow, to the NNP-I," said Misha Smelyanskiy, director of the AI System Co-Design division at Facebook.
Intel also revealed a new Movidius Vision Processing Unit (VPU), which is due in 2020. Though you'll have to wait for it, that hasn't stopped Intel from claiming it can deliver "more than 10 times the inference performance as the previous generation" and that it's 4.7 times more power-efficient than Nvidia's Jetson AGX Xavier system; we reckon Nvidia might have something to say about that.
As such, it looks like it's full steam ahead for Intel and AI as it heads towards 2020; we just wish it would kick out some new 10-nanometre desktop chips as well. µ