GOOGLE HAS announced what is being seen as a seismic development in AI hardware: the launch of its latest Tensor Processing Unit (TPU), dubbed simply TPU 2.0 and set to take over the company's ever-expanding data centres.
The processor, announced at Google I/O, was created using a similar methodology to the one that shaped the neural networks we showed you last week at Nvidia's GTC: it turns out the best way to design a neural network is to ask a neural network to do it. Each neural network designs another, the best one becomes the template for a better one, and so on.
The result, two years in the making, spans four chips capable of handling 180 trillion floating point operations per second (180 teraflops). Sequenced together into what the company calls a 'TPU pod', these can deliver a staggering 11,500 teraflops.
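Some quick back-of-envelope arithmetic on those figures (our own working, not anything Google has broken down, and it assumes the quoted peaks are exact and scale linearly):

```python
# Figures quoted in the announcement
MODULE_TFLOPS = 180    # one four-chip TPU 2.0 module
POD_TFLOPS = 11_500    # one 'TPU pod'

# Implied per-chip throughput for a four-chip module
per_chip = MODULE_TFLOPS / 4

# Implied number of modules ganged together in a pod
modules_per_pod = round(POD_TFLOPS / MODULE_TFLOPS)

print(per_chip)          # 45.0 teraflops per chip
print(modules_per_pod)   # 64 modules per pod
```

In other words, the quoted pod figure works out to roughly 64 of the four-chip modules wired together.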
To put this in perspective, we're looking at shaving about three quarters off the run time of a job compared with a CPU configuration, and it could even make Nvidia's GPU bods sweat. The new Tesla V100, launched at Nvidia's GTC last week, tops out at 120 teraflops, and we all thought that was a game-changer. Which of course it is, but… woah.
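Put as numbers, the two claims above look like this (our own illustration; the eight-hour job is a made-up example, and peak teraflops rarely translate directly into run time):

```python
# 'Three quarters off the run time' means the job takes a quarter as long
cpu_job_hours = 8.0                           # hypothetical CPU-bound job
tpu_job_hours = cpu_job_hours * (1 - 0.75)

# Raw peak-throughput ratio of one TPU 2.0 module vs one Tesla V100
tpu2_tflops = 180
v100_tflops = 120

print(tpu_job_hours)              # 2.0 hours, i.e. a 4x speed-up
print(tpu2_tflops / v100_tflops)  # 1.5x the V100's peak figure
```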
So it will all come down to pricing and availability. Nvidia has shown its chops in getting hardware priced and into the hands of the public at large; Google, less so, though we fully expect the commercial applications of TPU 2.0 are far from lost on the search giant.
Many AI researchers have said that they are now limited not by the concepts but by the ability to execute them on current hardware, and both recent announcements are likely to be a huge shot in the arm.
The technology is chiefly aimed at improving artificial intelligence, and those improvements will filter down through Google's entire ecosystem, whether in Google Assistant's suggestions, the new Google Lens architecture for photo recognition, or Google's work on proactive diagnosis of illnesses.
Google has said it will be giving 1,000 TPU 2.0 units to researchers free of charge to, you know, just see what happens. The only price is that they will have to open source the resulting research, which seems worth it to us. µ