ARTIFICIAL INTELLIGENCE PIONEER DeepMind has announced the formation of an ethics division to research the moral and societal consequences of AI.
The DeepMind Ethics and Society unit has been set up by the Google sister company with the aim of producing papers that match the quality of its existing AI research, while also setting benchmarks and goals for putting ethics into practice.
The meeting of minds includes Nick Bostrom, director of the Future of Humanity Institute; the economists Diane Coyle and Jeffrey Sachs; and Christiana Figueres, a world authority on climate change.
In a DeepMind blog post, the company explains: "As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work.
"At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes. Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face."
Although most famous for being rather good at the board game 'Go', DeepMind is working in a huge variety of areas and looking at how to bring AI to them.
It mirrors what is happening elsewhere at Alphabet, with 'machine learning' being one of the most used phrases at yesterday's Google Pixel 2 hardware launch.
The post continues: "If AI technologies are to serve society, they must be shaped by society's priorities and concerns. This isn't a quest for closed solutions but rather an attempt to scrutinise and help design collective responses to the future impacts of AI technologies. With the creation of DeepMind Ethics & Society, we hope to challenge assumptions—including our own—and pave the way for truly beneficial and responsible AI."
Subjects being tackled range from the outward-looking, such as the wider consequences of AI for society, to the more soul-searching, such as research into possible racial bias in algorithms.
In all cases, these are hot-button, world-changing issues that need to be resolved before AI heads too far in the wrong direction. µ