GRAPHICS OUTFIT Nvidia has used artificial intelligence (AI) to create a system that cleans up grainy photos.
Joining forces with boffins from MIT and Aalto University, Nvidia has built an AI system, dubbed Noise2Noise, that can remove 'noise' from grainy photos - usually caused by shooting in poor light - without ever seeing a 'clean' photo.
The system does this using deep learning, a form of machine learning that essentially picks apart data in much the same way as a human brain does, and something Nvidia has a serious tech hard-on for.
With Nvidia's Tesla P100 GPUs and the TensorFlow deep learning framework, the AI was trained on 50,000 noisy images taken from the ImageNet dataset. From there, it learnt what 'noise' was and how to remove it, creating photos that come pretty close to what the original should look like, albeit with a little less detail and some blur.
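The trick that makes this possible is statistical: if the noise has zero mean, averaging enough noisy observations of the same image recovers the clean one, so a network minimising squared error against noisy targets converges towards the clean signal anyway. This is a toy numpy sketch of that principle, not Nvidia's actual code - the one-dimensional "signal" and all variable names are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A 'clean' signal we pretend never to observe directly.
clean = np.sin(np.linspace(0, 2 * np.pi, 100))

# Many independent noisy observations of that same signal,
# standing in for repeated grainy shots of one scene.
noisy_targets = clean + rng.normal(0.0, 0.5, size=(1000, 100))

# The MSE-optimal estimate from noisy targets alone is their mean,
# which approaches the clean signal as observations accumulate.
estimate = noisy_targets.mean(axis=0)

print(np.abs(estimate - clean).max())  # residual error shrinks with more samples
```

A real Noise2Noise setup replaces the averaging with a deep network and pairs of independently corrupted images, but the reason it works is the same zero-mean argument.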
"It is possible to learn to restore signals without ever observing clean ones, at performance sometimes exceeding training using clean exemplars," the researchers stated in their paper.
"[The neural network] is on par with state-of-the-art methods that make use of clean examples — using precisely the same training methodology, and often without appreciable drawbacks in training time or performance."
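That "on par with clean examples" claim can be illustrated with a deliberately tiny regression instead of a neural network - again a hedged toy, not the paper's method: fitting the same model once against clean targets and once against independently corrupted ones lands on almost the same parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hidden clean values, a noisy input, and two kinds of target:
# the clean truth, and an independently noise-corrupted copy of it.
clean = rng.normal(size=n)
x = clean + rng.normal(0.0, 0.3, size=n)        # noisy input
y_clean = clean                                  # clean targets
y_noisy = clean + rng.normal(0.0, 0.3, size=n)   # noisy targets

# Least-squares slope for y = w * x under each training regime.
w_clean = (x @ y_clean) / (x @ x)
w_noisy = (x @ y_noisy) / (x @ x)

print(w_clean, w_noisy)  # near-identical fits
```

Because the target noise is independent of the input, it contributes nothing systematic to the fit - it only adds a little variance that washes out with enough data, which is exactly why clean targets turn out not to be necessary.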
Using AI to clean up photos is nothing particularly new, and the likes of Google already have AI-powered photography features to make your shoddy snapping look decent. But a lot of the time, these systems need to be trained using both clean and noisy pictures, which not only is more work intensive, but such data can be hard to source.
"There are several real-world situations where obtaining clean training data is difficult: low-light photography (e.g., astronomical imaging), physically-based rendering, and magnetic resonance imaging," the boffins said.
"Our proof-of-concept demonstrations point the way to significant potential benefits in these applications by removing the need for potentially strenuous collection of clean data. Of course, there is no free lunch - we cannot learn to pick up features that are not there in the input data - but this applies equally to training with clean targets."
So yeah, this is some pretty advanced AI tech, but it's still only a proof-of-concept, meaning we can't go expecting to find it plonked into the next wave of digital cameras and smartphones. Still, it once again shows how AI development is advancing at quite a lick, which will probably give the AI-wary Elon Musk something to worry about. µ