ELON MUSK is the headline act on a motion to the United Nations (UN) in which 116 founders of artificial intelligence (AI) and robotics companies call for a worldwide ban on autonomous weapons.
The open letter explains: "Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
"These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.
"We do not have long to act. Once this Pandora's box is opened, it will be hard to close."
The letter has been co-signed by companies on five continents, including Google-owned DeepMind in London and Uber-owned Geometric Intelligence, amongst a host of other names you'll probably recognise.
But this isn't the stuff of science fiction - it's the stuff of science fact, with countries including the US, China and right here in the UK developing autonomous weapons systems, according to data from Human Rights Watch, which has been campaigning to ban killer robots since 2013.
Musk has long spoken on behalf of his species about the dangers of AI, which, he believes, could wipe out the human race if misused.
He has also warned that the future of human survival could come down to making cyborgs of ourselves. We should be clear, that's "ourselves" as a race, we're not talking about fitting your Auntie Betty with a hoover attachment. Yet.
Recently, Captain Boring had something of a Twitter spat on the subject with Facebook founder Mark Zuckerberg, who has cast doubt on the danger posed by robots.
But here in the UK, Ray Chohan, SVP of Corporate Strategy at PatSnap, which specialises in the analysis of tech patent data, said that it is how our personal data is used, not the big-badda-boom, that should occupy researchers most.
"As the world's largest organisations gather personal data, and apply machine learning and artificial intelligence to it to gain insight, it would appear where we are most at threat, at least for the foreseeable future, is how companies use our personal data.
"Government regulations can play a huge part in governing the decisions of companies' R&D investment. For example, artificial intelligence patents that concerned ethics or morality saw a spike in 2014, a year in which the GDPR was being widely discussed in the EU, particularly around the ‘right to be forgotten."
The concerns of the letter seem to indicate that the issues particularly surround the tech getting into the wrong hands - despot states, private armies, terrorists and, of course, Ernst Blofeld. µ