BASTION OF VIRTUE Microsoft has been revealed to have rejected a request to use its facial recognition technology on human rights grounds.
Now you've finished spitting out your coffee in shock, we'll explain.
A report from Reuters suggests that a California law enforcement agency (CHiPs, anyone?) had approached Microsoft about installing its facial recognition tech in vehicles and bodycams, but in a shocking show of ethics, Microsoft refused.
The problem seems to stem from an issue that we've talked about many times on INQ - AI bias. Microsoft concluded that because the vast majority of facial recognition training data consists of white males, many facial recognition systems are markedly less accurate for women and people of colour, and Microsoft rightly believes that is something that needs fixing before the feds get their mitts on it.
Company president Brad Smith confirmed the decision at a conference on AI at Stanford University: "Anytime they pulled anyone over, they wanted to run a face scan," he explained, but after looking at the potential bias, "we said this technology is not your answer."
He also confirmed that the tech had been denied to the governing authorities of an unnamed capital city, which had wanted to connect it to its CCTV network. Smith suggested it would have caused problems for any assembled groups, which might have been deemed political.
Microsoft is still offering up the tech, but only in closed environments, including one (also unnamed) prison.
Smith emphasised that Microsoft is not in the business of suppressing public freedoms or the right to protest, and so shies away from government-led blanket surveillance opportunities, as well as autonomous weapons.
Microsoft has refused to comment on any of the companies or governments involved. Still, we'd place a sneaky side-bet that the unnamed government probably starts with a "Chi" and ends with a "na". µ