GOOGLE HAS been setting out its policy on AI weaponry and surveillance, after the whole Pentagon contract debacle. But it's not a blanket ban.
In a blog post, Sundar Pichai explains that Google will still work with governments in areas like training and cybersecurity, but other things will be limited. The company won't, for example, work on surveillance that falls outside "internationally accepted norms" or anything dangerous unless "the benefits substantially outweigh the risks".
Google's contract for the drone-footage analysis programme that got it into this mess won't be renewed when it expires next year, after multiple staff resignations and an employee petition signed by thousands.
Google employees said that they were not interested in being part of the "business of war".
But Pichai's promises are subjective and open to interpretation - no easier to pin down than exactly what "don't be evil" means. It seems the tech world is satisfied with principles, not always with what they mean in context.
And yet, Pichai's document is described as setting out "concrete standards". So concrete, in fact, that, as CNBC points out, they still allow hella wiggle room for Google Cloud to go chasing after defence contracts.
Google argues that it needs to be able to allow for the unexpected, like TensorFlow which, being open source, could be adapted in ways that fall well outside Google's remit. Except that includes the current Pentagon contract, which uses... you guessed it.
But it will continue (it says) to work to "limit potentially harmful or abusive applications" of AI. Again, pretty vague, as concrete goes.
Broadly speaking, the rules state that anything that could cause harm will be limited to cases where the benefit substantially outweighs the risk, and even then will be subject to appropriate safety constraints.
It won't work on weapons or other technologies that are meant to cause injury to people. So that's it for the Mrs Brown's Boys Google Maps add-on.
It won't work on surveillance that falls outside international norms (though define those - China would have a rather different opinion to Germany).
It won't work on anything that contravenes "widely accepted principles" of human rights.
But there's still the disclaimer that "we will continue our work with governments and the military in many other areas".
Taking one example - pharma - there's nothing to stop medical research by Google being weaponised by others, without Google's involvement, so worthy as all this is, it's impossible to control completely.
At least, however, Google is doing as much as it can to try and be a gatekeeper. After all, you catch more flies with honey than with a ruddy great hyper-intelligent rocket launcher with 74 cameras attached to it and capable of spitting anthrax. µ