ARTIFICIAL INTELLIGENCE (AI) is just waiting to be exploited for malicious use, security experts have warned.
The Malicious Use of Artificial Intelligence report, authored by a handful of experts spanning academia, civil society and the tech industry, has called for governments to consider new laws that limit the scope for AI to carry out nasty stuff like manipulation of public opinion and automated hacking.
The report acknowledges that AI has its benefits, but the experts want AI engineers to be aware that the technology can be misused and to be proactive in putting measures in place to prevent clever computers from being manipulated for no good.
And the report wants more people involved in finding ways to mitigate the risks AI could pose if the wrong people got their hands on it. This includes getting policymakers in on the act to work closely with boffins to figure out ways to prevent the malicious use of AI.
The experts also said lessons could be learnt from other "dual-use" technologies such as cybersecurity, where combating hacker threats is closely related to hacking itself, and the best practices from those areas could be applied to AI development.
"The proposed interventions require attention and action not just from AI researchers and companies but also from legislators, civil servants, regulators, security researchers and educators. The challenge is daunting and the stakes are high," the report said.
In a somewhat Black Mirror fashion, the experts pointed out some alarming ways AI could be manipulated.
They suggested that autonomous cars could be used to crash into targets, or that drones could be trained with image recognition to single out an individual and plough into them. And AI's ability to crunch through computing tasks far faster than any human opens the door for hackers to use smart tech to automate attacks such as phishing.
Furthermore, despotic governments could use AI to carry out mass surveillance or create automated and targeted propaganda, as hinted at by Russia's election-manipulating Twitter bots.
"We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems," the report said ominously.
This is all potentially worrying stuff, and that's before we take into account that AI could see many people lose their jobs if it becomes smarter and more widespread. Perhaps Bill Gates had a point after all... µ