ONE DAY NVIDIA WILL CREATE what's effectively Star Trek's Holodeck, at least going by its jumps in artificial intelligence image tech, which can now cleverly turn real-world videos into 3D renders.
Team Green has achieved this using its DGX-1 supercomputer, which is packed full of Tensor Cores - the same cores found in Nvidia's high-end GeForce RTX cards - to power deep learning neural networks that work out how to create 3D worlds from existing videos.
Nvidia demonstrated the tech working with footage captured from the dashcam of a self-driving car, from which the AI system extracted "high-level semantics"; think the extraction of key details from an image, such as where a car is and where the edges of an object are.
From there the neural network fills in and colours the image using Epic's Unreal Engine 4, effectively creating a 3D world rendered by an AI rather than graphic artists.
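The two-stage idea described above - pull a semantic label map out of each frame, then have a model paint colours back in from those labels - can be sketched in miniature. The following toy Python snippet is purely illustrative (the function names, labels and palette are our own stand-ins, not Nvidia's actual pipeline, where both stages would be deep networks):

```python
import numpy as np

# Illustrative labels a segmentation stage might assign to pixels.
LABELS = {0: "road", 1: "car", 2: "building"}

def extract_semantics(frame: np.ndarray) -> np.ndarray:
    """Stage 1 stand-in: bucket greyscale pixel intensities into
    crude semantic labels (a real system uses a trained network)."""
    return np.digitize(frame, bins=[85, 170])  # 0, 1 or 2 per pixel

def render_from_semantics(labels: np.ndarray) -> np.ndarray:
    """Stage 2 stand-in: paint each label with a fixed RGB colour
    (a real system uses a generative model to synthesise detail)."""
    palette = np.array([[90, 90, 90],     # road: grey
                        [200, 30, 30],    # car: red
                        [160, 130, 90]])  # building: tan
    return palette[labels]

# A fake 4x4 greyscale "dashcam frame" in, a 4x4 RGB render out.
frame = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
rendered = render_from_semantics(extract_semantics(frame))
print(rendered.shape)  # (4, 4, 3)
```

The point of the split is that the expensive, hand-made part of graphics work - deciding what colour and texture every surface gets - is replaced by a model that has learned those associations from real footage.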
Naturally, the AI needed to be trained first on a host of videos of cities to help the clever tech understand how to render urban environments. But once that was done, the results were, going by the videos Nvidia has popped online, pretty bloody impressive.
"NVIDIA has been creating new ways to generate interactive graphics for 25 years - and this is the first time we can do this with a neural network," said Bryan Catanzaro, vice president of applied deep learning at Nvidia. "Neural networks - specifically, generative models - are going to change the way graphics are created."
"One of the main obstacles developers face when creating virtual worlds, whether for game development, telepresence, or other applications, is that creating the content is expensive. This method allows artists and developers to create at a much lower cost, by using AI that learns from the real world."
As graphics tech research, Nvidia's efforts here are pretty impressive. But such tech could have a significant impact on the future of graphics rendering, development and art.
Take game development, for example: unless you have a big budget, creating convincing 3D worlds takes time and effort. Nvidia's AI system could effectively create the foundations of a 3D world, then allow digital artists to add in the detail on top, thereby reducing the time it takes to create a 3D game or experience.
For indie developers with access to the Unreal Engine 4 but limited resources, such tech could be the key to creating visually impressive games without donating their gran to science.
And the smart tech could also have other benefits for folks who make use of GPU and machine learning systems, according to Nvidia's researchers.
"The capability to model and recreate the dynamics of our visual world is essential to building intelligent agents," the researchers said in their paper. "Apart from purely scientific interests, learning to synthesize continuous visual experiences has a wide range of applications in computer vision, robotics, and computer graphics."
Of course, this is all research at the moment, and you need a supercomputer to do it. But it paints a future in which AI is helping humans, not scaring poo-water quaffing tech luminaries. µ