SAN JOSE: CHIP DESIGNER Nvidia said at its GPU Technology Conference (GTC) today that it will use its Chimera technology to route data from other sensors, not just cameras.
Nvidia's Chimera technology was announced alongside the firm's Tegra 4 system on chip (SoC), and it constantly processes data from the camera's imaging sensor. The firm said it expects to connect other sensors to the Chimera architecture for tasks such as gesture recognition.
Neil Trevett, VP of mobile content at Nvidia, said that the Chimera architecture is currently used for computational photography, performing constant high dynamic range (HDR) image processing. Trevett described Chimera as "being able to take sensor data and route it to the right place", adding that it can bypass traditional image pre-processing hardware.
While Nvidia is using Chimera with the camera as its data source, Trevett said "more sensor oriented hardware blocks could be plugged into Chimera", citing infrared and depth scanners as examples.
As for what Nvidia will allow developers to do with the sensor data, Trevett showed off Nvidia VSX, which he said will enable vision processing, tap-to-track and panoramic printing, among other use cases. However, Trevett didn't say when Chimera's functions will be expanded beyond processing data from the camera.
Nvidia's Chimera architecture could help its Tegra SoCs gain a foothold in high-end smartphones and tablets, which will increasingly rely on sensor data to provide next-generation user interfaces. µ