MORE THAN 1,000 Artificial Intelligence (AI) researchers and experts have signed an open letter condemning the rise of autonomous weapons.
The letter, presented at the 2015 International Joint Conference on AI in Buenos Aires, warns that a "military AI arms race" could develop if nothing is done, and calls for a ban on "offensive autonomous weapons".
It is signed by high-profile technology personalities, such as Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind CEO Demis Hassabis and Stephen Hawking, among hundreds of other AI and robotics researchers.
It seeks to head off the development of autonomous weapons that have the ability to select and engage targets without human intervention.
The letter noted that this could include armed quad-copters that can search for and eliminate people meeting certain pre-defined criteria.
The concern has come about now because the experts believe that AI technology has reached a point where the deployment of such systems is "feasible within years, not decades", and "the stakes are high".
The letter even goes as far as to describe autonomous weapons as the third revolution in warfare, after gunpowder and nuclear arms.
"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," reads the letter.
"If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."
The letter argues that AI weapons are particularly dangerous because, unlike nuclear weapons, they are cheap to make and require no costly or hard-to-obtain raw materials, and so could become ubiquitous and easy for all significant military powers to mass-produce.
As a result, the AI experts and industry figures say it will be only a matter of time before these sorts of weapons appear on the black market and in the hands of terrorists and dictators wishing to better control their populations.
"Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group," the letter reads.
"We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.
"We believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control."
Toby Walsh, professor of AI at the University of New South Wales and NICTA, said that, while AI can be used to help tackle many of the pressing problems facing society today - inequality and poverty, the rising cost of healthcare, the impact of global warming - it can also be used to inflict unnecessary harm.
"We need to make a decision today that will shape our future and determine whether we follow a path of good," he said.
"We support the call by a number of different humanitarian organisations for an UN ban on offensive autonomous weapons, similar to the recent ban on blinding lasers."
Concerns about AI extend beyond autonomous weapons. Musk and Wozniak have both warned that the development of increasingly intelligent computer systems could slip out of our control and signal the end of the world.
Musk, who probably had not spent the weekend watching Demon Seed or Transformers, made clear his concern that the more advanced robots become, the less respect they will have for their creators.
"I don't think anyone realises how quickly AI is advancing. People really have no idea. If there is a super intelligence, and its utility function is detrimental to humanity, it will have a very bad effect," he said.
He added that humans could be viewed as the "source of all spam" and that the AI overseer may nip us in the bud.
Wozniak echoed Musk's concerns, claiming during an interview with the Australian Financial Review that computers are going to take over from humans and that humans will become robots' pets.
"Computers are going to take over from humans, no question. Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people," he said.
However, it's not all doom and gloom. Linux creator Linus Torvalds said more recently that we have nothing to fear from AI, dismissing the dramatic remarks from the likes of Musk, Hawking and Wozniak.
Torvalds made his views on AI plain when a Slashdot user quizzed him as to whether he thinks it will be a "great gift" to mankind or a potential danger. He replied that those scared of AI must be "on drugs".
"The thing is, since that kind of AI will need training, it won't be 'reliable' in the traditional computer sense. It's not the old rule-based prolog days, when people thought they'd 'understand' what the actual decisions were in an AI," he said.
In a more measured, positive response to AI, computer science experts declared in May that advances in AI could drive the next industrial revolution as intelligent computer systems replace certain human-operated jobs.
The scientists said during a panel debate hosted by ClickSoftware about the future of technology that it could lead to a "hollowing out" of middle-income jobs.
"It's really important that we take AI seriously. It will lead to the fourth industrial revolution and will change the world in ways we cannot predict now," AI architect and author George Zarkadakis claimed. µ