NVIDIA REALLY WANTS CHATTY ROBOTS to become a reality, and it's a step closer to doing that by achieving record-breaking performance in training the world's largest artificial intelligence language model.
Team Green's AI platform trained BERT-Large, widely considered the most advanced AI language model, in just 53 minutes, marking the first time an AI system has got the model up and running in less than an hour.
Once trained, the model was able to put its knowledge to use - in a process known as inference - and return results in two milliseconds, setting a new record; previously, a 10 millisecond response time was considered a high standard.
To do all this, Nvidia brought one of its SuperPOD systems to bear on BERT. Such a system is made up of 92 of Team Green's DGX-2H systems running 1,472 V100 GPUs, bringing a serious amount of power to crunch through BERT and knocking normal training times down from several days to less than an hour. For the inference side of things, Nvidia used its T4 GPUs, which tap into the company's TensorRT deep-learning tech.
The garden-variety AI developer might not have access to such tech firepower, so Nvidia is making its BERT training code and a "TensorRT BERT Sample" available on GitHub, letting others benefit from its research.
We'd have packed up and called it a day there, but Nvidia decided to push things further and developed the world's largest language model, which is some 24 times the size of BERT-Large. Using the Transformer architecture that underpins BERT, Nvidia's boffins created the massive language model and dubbed it "Megatron" - see what they did there?
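For the curious, the core trick inside Transformer-based models like BERT and Megatron is attention, where each token weighs up every other token when building its representation. Below is a minimal, illustrative sketch of scaled dot-product attention in plain Python - it is not Nvidia's code, just a toy rendering of the idea using made-up helper names:

```python
import math

def softmax(xs):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each query scores every key,
    the scores become weights via softmax, and the output is the
    weighted average of the value vectors."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)
        outputs.append([
            sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))
        ])
    return outputs
```

Real implementations do this over batched tensors on GPUs with many attention "heads" in parallel, which is where hardware like Nvidia's SuperPOD earns its keep.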
Team Green is offering up the PyTorch code used to train Megatron to developers who want to create their own language models based on the Transformer architecture.
If you're wondering what all this means for the average Jill or Joe on the street, then we hear you, buddy, as this all sounds like reasonably complex stuff. But basically it'll translate into AI systems and smart software that are more effective at understanding conversational language and complex questions; think better search engine results and less bumbling virtual assistants.
So Nvidia's breakthrough has some promising potential for future smart tech and AIs. But, if the rise of machines comes in the form of a killer robot that wants to discuss Tolstoy before it pummels the life out of us, then we'll be levying blame at Nvidia. µ