GOOGLE'S ARTIFICIAL INTELLIGENCE (AI) ARM, DeepMind, has taken on a new form.
The super-duper-supercomputer which beat the human world champions at the ancient Chinese board game Go has now learned to play it without ever being taught.
AlphaGo was the program with which DeepMind conquered the world of Go, but its successor, AlphaGo Zero, eschews example games and data from human matches altogether.
This time, it was given a blank game board and the rules, and left to get on with it. Now what's really important to remember is that Go's rules are simpler to learn than Chess's. But, without wishing to get all philosophical about it, there are more legal moves than grains of sand in the desert, or bad Adam Sandler movies on Netflix.
Despite so many options, within 72 hours AlphaGo Zero could beat the original AlphaGo by 100 games to nothing.
The original DeepMind computer, which won the honorific world title, is now in a display case at DeepMind's London offices.
"We're quite excited because we think this is now good enough to make some real progress on some real problems even though we're obviously a long way from full AI," Demis Hassabis, CEO of DeepMind, told reporters.
The wider issue here is that, in the past, neural networks have learned the rules of a game by observing it being played. Being taught the rules and then left to work out how to apply them is a lot quicker - it took the original AlphaGo far longer to learn from human games - but at the same time, this technique has no checks and balances.
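To give a flavour of what "rules plus self-play" means in practice, here's a minimal sketch - our own illustration, not DeepMind's code - using tabular Monte-Carlo learning on the toy game of Nim instead of a neural network on Go. The agent is told only the rules (take one to three stones, taking the last stone wins) and improves purely by playing against itself:

```python
import random

# A toy "rules only" self-play learner, vastly simplified from the
# AlphaGo Zero idea: tabular value estimates on Nim, no neural net.
# Rules of this Nim variant: one pile of stones, players alternate
# taking 1-3 stones, and whoever takes the last stone wins.

ACTIONS = (1, 2, 3)

def legal_moves(pile):
    """Moves the rules allow from the current pile size."""
    return [a for a in ACTIONS if a <= pile]

def train(start_pile=21, episodes=20000, epsilon=0.1, seed=0):
    """Learn move values purely from self-play games."""
    rng = random.Random(seed)
    q = {}       # (pile, action) -> average return for the player to move
    counts = {}  # (pile, action) -> number of times updated
    for _ in range(episodes):
        pile = start_pile
        history = []  # (pile, action) per ply, players alternating
        while pile > 0:
            moves = legal_moves(pile)
            if rng.random() < epsilon:       # explore occasionally
                a = rng.choice(moves)
            else:                            # otherwise play greedily
                a = max(moves, key=lambda m: q.get((pile, m), 0.0))
            history.append((pile, a))
            pile -= a
        # The player who took the last stone won: +1 for that player's
        # moves, -1 for the opponent's, walking back through the game.
        reward = 1.0
        for sa in reversed(history):
            n = counts.get(sa, 0) + 1
            counts[sa] = n
            old = q.get(sa, 0.0)
            q[sa] = old + (reward - old) / n  # incremental average
            reward = -reward
    return q

def best_move(q, pile):
    """Greedy move under the learned values."""
    return max(legal_moves(pile), key=lambda m: q.get((pile, m), 0.0))
```

After training, the agent rediscovers the well-known optimal strategy - always leave the opponent a multiple of four stones - without ever seeing a human game, which is the self-play idea in miniature.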
And as we've always known with computers: garbage in... garbage out (GIGO). So if AlphaGo Zero is taught the rules wrongly, it could be Skynet time. If that's your way of looking at things.
Fortunately, there's a recently launched ethics unit for that.
DeepMind is also involved in a bunch of other projects, including a centralised intelligent patient monitoring system in NHS hospitals. µ