UBER IS NOT the first company you think of when it comes to artificial intelligence (AI), but like all big tech companies, it's too important to ignore. After all, Uber has self-driving car ambitions, and the tech behind its app relies on it - and if some is good, more must be better.
But in these fledgling days, sometimes it's best to return to the past to create the future.
One of AI's biggest challenges has been conquering a pair of classic 8-bit video games - Montezuma's Revenge and Pitfall! - mostly because neither game follows the usual rule of 'if you do this, then this is your reward', meaning the AI has almost nothing to reinforce as it learns.
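That 'nothing to reinforce' problem can be made concrete with a toy sketch. The function below is purely illustrative (the state keys are made up, not from any real emulator), but it shows why a reward-chasing learner stalls in these games:

```python
def reward(game_state):
    # In Montezuma's Revenge-style games, almost every action yields
    # nothing: thousands of steps can pass with zero feedback.
    if game_state.get("treasure_collected"):
        return 100
    return 0  # the overwhelmingly common case

# A reward-driven learner sees 0 for nearly every action it tries,
# so there is no trail of 'getting warmer' signals to follow.
print(reward({"x": 3, "y": 7}))  # → 0
```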
Such is its status as a holy grail that DeepMind cites it as part of the reason that AlphaGo was so flipping good at beating humans.
But a team from Uber, led by Jeff Clune of the University of Wyoming, has developed a more reliable way for AI to learn, one that beats the issues that have kept other AIs from scoring more than the UK at Eurovision.
The change is pretty simple. Known as "Go-Explore", in a nutshell, it teaches the neural network that it's OK to go back to areas it might otherwise have forgotten. That means if it finds a locked door, and later goes off and finds a key, it no longer dismisses that locked door as a bad idea and knows it's worth checking whether it now opens.
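The 'remember and return' idea can be sketched in a few lines. This is a minimal toy version, assuming a deterministic 5x5 grid world stands in for a game level; the names and the grid are illustrative, not Uber's actual implementation:

```python
import random

# Toy deterministic grid world: start at (0, 0), goal at (4, 4).
# The only reward sits at the goal, so there is no signal to follow.
GOAL = (4, 4)
MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}

def step(pos, move):
    dx, dy = MOVES[move]
    x, y = pos[0] + dx, pos[1] + dy
    if 0 <= x <= 4 and 0 <= y <= 4:
        return (x, y)
    return pos  # bumped into the edge, stay put

def go_explore(iterations=500, seed=0):
    rng = random.Random(seed)
    # Archive: cell -> shortest trajectory (list of moves) found so far.
    archive = {(0, 0): []}
    for _ in range(iterations):
        # 1. Pick a previously visited cell - even a 'forgotten' one.
        cell = rng.choice(list(archive))
        # 2. "Go": return to it by replaying its stored trajectory
        #    (possible here because the toy world is deterministic).
        pos, traj = (0, 0), list(archive[cell])
        for m in traj:
            pos = step(pos, m)
        # 3. "Explore": take a few random actions from there.
        for _ in range(5):
            m = rng.choice('UDLR')
            pos = step(pos, m)
            traj.append(m)
            # 4. Remember any new cell, or a shorter route to a known one.
            if pos not in archive or len(traj) < len(archive[pos]):
                archive[pos] = list(traj)
    return archive

archive = go_explore()
print(GOAL in archive)  # → True: found without any reward signal at all
```

The key design point is the archive: instead of abandoning states that didn't pay off immediately, the agent keeps a route back to every one of them, so the locked door is always a candidate to revisit once the key turns up.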
The results have been striking, with learning speed noticeably improved, and as a result, it can now beat most human players at the same games.
They're not the only team working on the problem, but so far it's Uber that has achieved the best results.
Practical applications are legion, because this represents a breakthrough in making more human-like decisions. And when AI can think better than a human (which is still a long way off), then, well, the Skynet's the limit.
It also might help teach it that when humans are drunk, they won't stay drunk forever. Unless they work for INQ, obviously. μ