HIGH-TECH INSTITUTION MIT has developed a neural network chip that could reduce the power consumption of devices by a whopping 95 per cent.
Ideal for letting battery-powered gadgets take advantage of more complex neural networks, the chip could even make it practical to run neural networks locally on smartphones, or to embed them in household appliances, MIT said.
"Most recent advances in artificial-intelligence systems such as speech- or face-recognition programs have come courtesy of neural networks, densely interconnected meshes of simple information processors that learn to perform tasks by analyzing huge sets of training data," the university explained.
"But neural nets are large, and their computations are energy intensive, so they're not very practical for handheld devices. Most smartphone apps that rely on neural nets simply upload data to internet servers, which process it and send the results back to the phone."
In response, the MIT researchers developed a special-purpose AI chip that increases the speed of neural-network computations by three to seven times over its predecessors, while cutting power consumption by up to 95 per cent.
"The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations," explained Avishek Biswas, an MIT graduate student who led the new chip's development.
"Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption."
However, Biswas - who conducted the research alongside his thesis advisor, Anantha Chandrakasan, dean of MIT's School of Engineering - said these algorithms can be simplified to one specific operation, called "the dot product".
"Our approach was [to] implement this dot-product functionality inside the memory so that you don't need to transfer this data back and forth," he said, adding that the approach improves efficiency by mimicking the workings of the human brain.
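In digital terms, the dot product at the heart of these workloads is simple: each of a node's inputs is multiplied by a learned weight, and the products are summed. A minimal sketch, with illustrative values that are not taken from the chip itself:

```python
# A hypothetical neural-network node: its output (before any activation
# function) is the dot product of its inputs and its learned weights.
inputs = [0.5, -1.2, 3.0, 0.7]    # example input values
weights = [0.9, 0.4, -0.3, 1.1]   # example learned weights

# Dot product: multiply pairwise, then sum.
dot = sum(x * w for x, w in zip(inputs, weights))
```

On a conventional chip, every one of those multiplications means moving a weight and an input between memory and processor, which is exactly the traffic Biswas is trying to eliminate.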
That's because, in the chip, a node's input values are converted into electrical voltages and then multiplied by the appropriate weights.
"Summing the products is simply a matter of combining the voltages," Biswas said. "Only the combined voltages are converted back into a digital representation and stored for further processing."
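A rough software analogue of that analog trick, assuming a toy analog-to-digital converter that rounds to a fixed voltage step (the step size and values are invented for illustration):

```python
def adc(value, step=0.05):
    """Toy analog-to-digital conversion: snap to the nearest step."""
    return round(value / step) * step

# Inputs become 'voltages'; each is scaled by its weight, the products
# combine into a single analog sum, and only that final sum is
# converted back into a digital value.
voltages = [0.5, -1.2, 3.0, 0.7]
weights = [0.9, 0.4, -0.3, 1.1]

analog_sum = sum(v * w for v, w in zip(voltages, weights))
digital_out = adc(analog_sum)
```

The point of the design is that only `digital_out` crosses back into the digital domain; the intermediate products never touch memory.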
The prototype chip can thus calculate dot products for 16 nodes at a time in a single step, instead of shuttling between a processor and memory for every computation.
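Computing 16 nodes at once amounts to a matrix-vector product: one weight row per node, all applied to the same input vector in a single step. A sketch with randomly generated stand-in weights:

```python
import random

random.seed(0)

N_NODES, N_INPUTS = 16, 4

# One row of weights per node; placeholder values for illustration.
weight_rows = [[random.uniform(-1.0, 1.0) for _ in range(N_INPUTS)]
               for _ in range(N_NODES)]
inputs = [0.5, -1.2, 3.0, 0.7]

# All 16 dot products in one matrix-vector product (one 'step'),
# rather than 16 separate trips between processor and memory.
outputs = [sum(x * w for x, w in zip(inputs, row)) for row in weight_rows]
```

In the chip this parallelism happens in the memory array itself; the list comprehension here only illustrates the shape of the computation.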
IBM's vice president of AI, Dario Gil, labelled the development "a huge step forward".
"The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays," he said.
"It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT in the future."