EARLIER THIS WEEK, a Russian-Ukrainian team announced that it had become the first to beat the legendary Turing Test - the threshold at which Alan Turing said a machine can be said to be "thinking" because it is capable of fooling a human panel into believing it is human.
At the time, I was bouncing up and down with enthusiasm that soon we'd all be walking around with our mythical Plastic Pal Who's Fun To Be With there to do our bidding. But then it occurred to me - artificial intelligence is a potential disaster area.
My concerns don't stem from being trapped in The Matrix or controlled by Skynet. The real terror is a lot simpler. Maybe I have a few trust issues, but aren't we all putting our already wafer-thin privacy at risk?
Let's skip over the fact that the Turing Test is fundamentally flawed. It's a measurement devised more than 60 years ago with a 30 percent pass mark. Heck, if human intelligence tests had a 30 percent pass mark, Katie Price would be at serious risk of being admitted to Mensa.
We'll also skip over the fact that a panel of judges who knew they were judging an artificial intelligence contest - and were charged with deciding whether something that might be a machine was a machine - might have been a little biased.
But all of these suspicions about the result notwithstanding, if we really have reached an age of artificial intelligence, can we actually trust it?
We are far more likely to entrust information to humans, or at least those we perceive as humans, and so we're also likely to be more comfortable giving our personal details to a seemingly sentient machine.
Not for the first time in these columns, I am looking to 2001: A Space Odyssey to make a point. Imagine that HAL9000 became a reality. I wouldn't be scared of him turfing me out of an airlock into the cold void of space. It'd be a good chance to get some peace and quiet to work on my novel.
Oh no, I'd have a bigger worry. Because if Arthur C. Clarke had been writing today, his concern would be that this "extra crew member" had been entrusted with all the secrets of the mission, the crew, and the government. And despite its warm and sunny disposition, it's still a computer. And computers can be hacked.
Part of the purpose of creating artificial intelligence is to create entities that we trust to do our dirty work for us - the stuff that we don't want to do, and the stuff that we're not best fixed to do. But if we can't even secure our websites, why on earth would we entrust Metal Mickey with our lives, when hackers can already find out in a heartbeat that the last thing I crowdfunded was a tea-towel steamer with NFC?
And it's not just data breaches, either. If a hacker can get into an AI mind, then it can be reprogrammed to go rogue. HAL9000's malfunction wouldn't have been caused by a moral conflict in its programming, but because a fundamentalist religious hacking group wanted to make sure man didn't discover that there was no god - or insert alternative deity or reality TV star here.
And hacking might not be that hard either. Because if we reach the point where a computer can reliably pass for a human, then what's to stop a computer stealing someone's credentials and impersonating them in order to get the access it wants? That would be the ultimate identity theft.
The real concern is not whether or not humans can mistake computers for humans. It's whether the computers can mistake other computers for humans.
There, that's got you thinking, hasn't it?
Incidentally, the tea-towel steamer with NFC hasn't really been produced yet, but if you're thinking about it, I own the patent and my robot lawyers are watching. µ