ARTIFICIAL INTELLIGENCE (AI) IS PREJUDICED against anyone who isn't male and white, thanks to built-in bias.
At least that's according to research conducted by the Massachusetts Institute of Technology and Stanford University, where a team of boffins examined three AI-powered facial recognition systems and broke down the accuracy of the results by gender and race.
The researchers put the image recognition systems to the test against a set of 1,200 images that, unusually, contained more women and more people of colour, with skin tones classified using the Fitzpatrick scale, than white males.
They found that the error rate for light-skinned men was never worse than 0.8 per cent, while the error rate for dark-skinned women rocketed to more than 20 per cent in one classification case and soared above 34 per cent in the other two.
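The disaggregated evaluation behind those figures boils down to computing an error rate for each intersectional subgroup rather than one overall accuracy number. A minimal sketch of that idea, using made-up labels and predictions rather than the paper's actual data:

```python
from collections import defaultdict

def error_rates_by_subgroup(records):
    """Compute the classification error rate per (gender, skin type) subgroup.

    records: iterable of (gender, skin_type, true_label, predicted_label).
    Returns a dict mapping (gender, skin_type) -> error rate.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for gender, skin, truth, predicted in records:
        group = (gender, skin)
        totals[group] += 1
        if predicted != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy data, not from the study: each tuple is
# (subject's gender, Fitzpatrick grouping, true label, classifier's guess).
sample = [
    ("male", "lighter", "male", "male"),
    ("male", "lighter", "male", "male"),
    ("female", "darker", "female", "male"),    # misclassified
    ("female", "darker", "female", "female"),
]
rates = error_rates_by_subgroup(sample)
```

A single aggregate accuracy over `sample` would look respectable; only the per-subgroup breakdown exposes that all the errors land on one group, which is precisely the effect the study measured.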
The researchers noted that such classification bias comes from the data sets used to train the deep learning neural networks that underpin the AI part of image recognition.
According to the researchers' paper, one "major US technology company" had a data set that was more than 77 per cent male and more than 83 per cent white, making the resulting system naturally better at picking out lighter-skinned men than darker-skinned women; in other words, biased.
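The skew being described is straightforward to quantify: tally each demographic group's share of the training set before training ever starts. A hypothetical sketch (the labels and proportions here are illustrative, not the company's actual data):

```python
from collections import Counter

def composition(labels):
    """Return each label's share of the data set as a fraction of the total."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Illustrative gender annotations for a skewed face data set.
genders = ["male"] * 77 + ["female"] * 23
shares = composition(genders)
```

Running this kind of audit over the annotations of a training corpus, and rebalancing or re-collecting when one group dominates, is the sort of check the researchers argue should happen before a model is deployed.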
"What's really important here is the method and how that method applies to other applications," says Joy Buolamwini, researcher in the MIT Media Lab's Civic Media group and first author on the paper.
"The same data-centric techniques that can be used to try to determine somebody's gender are also used to identify a person when you're looking for a criminal suspect or to unlock your phone. And it's not just about computer vision. I'm really hopeful that this will spur more work into looking at [other] disparities."
Such bias is sadly nothing new; Google's image recognition in its Photos app made a massive classification gaffe that highlighted how bias can end up programmed into AI systems.
The cause of this is almost certainly a lack of diversity in the workforce of tech giants, which despite diversity efforts remain predominantly white and male.
As such, when preparing data sets for AI training and coding the algorithms for such systems, there's likely to be a degree of bias, whether active or unconscious; we'd like to think it's the latter, as we hope people working for tech giants have a progressive and inclusive mindset.
The researchers noted that the solution to this problem is a more diverse workforce, or at least making programmers and AI engineers more aware of the diverse audiences their systems will cater for.
Of course, that's easier said than done, as even tech firms with the best intentions can struggle simply to maintain a balance of men and women, let alone expand diversity in more, well, diverse ways. µ