SMART SOFTWARE rising up and borking the world is increasingly becoming a worry for some, so it's no surprise Google has created an external advisory board to help prevent unethical AI use.
Dubbed the Advanced Technology External Advisory Council (ATEAC), the boffin-based board will scrutinise Google's AI development to ensure it remains socially beneficial in the real world and doesn't infringe upon the ethical values of societies and companies.
Rather than look for Skynet-like situations and the threat of robots turning humanity into pets or goo for fuel, the advisory board will carry out more realistic scrutiny, addressing things like bias in machine learning algorithms and the effects of facial recognition technology on privacy.
"This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work," said Kent Walker, senior vice president at Google Affairs.
"We recognise that responsible development of AI is a broad area with many stakeholders. In addition to consulting with the experts on ATEAC, we'll continue to exchange ideas and gather feedback from partners and organisations around the world," Walker added, which suggests Google will share its AI insights with other firms.
Unless you're big into AI academic research, you're not likely to recognise many names on the ATEAC, but suffice to say it's full of high-calibre boffins, such as Joanna Bryson, who has a background in AI ethics.
Having an independent board for AI ethics could help Google avoid sticky situations such as the debacle surrounding Project Maven, a drone programme involving the US Department of Defence, which Google workers took exception to. Though this arguably shows that Google can't trust itself to use AI ethically without any oversight. µ