GOOGLE HAS made a tool available that it hopes will aid devs in the battle against trolling and harassment.
Jigsaw, a long-standing project in Google's arsenal, uses machine learning to spot online behaviour that shouldn't be there.
Its latest tool, called Perspective, measures the 'toxicity' of comments. It was built by trawling through millions of existing rants and 'teaching' the software by serving them up to volunteers to grade.
With publishers including Wikipedia, The Guardian, The Economist and The New York Times already testing the tool, it could become a powerful weapon in the battle against online troublemakers. Alternatively, one could argue it might be a barrier to free speech.
Jigsaw is keen to point out that Perspective is meant to be an indicative tool, and that humans need to have the final say on whether a comment gets published.
At INQ, the Disqus commenting system blocks certain words, but it doesn't delete them, instead flagging them for us to check. Perspective is simply a more sophisticated version of that: it's designed to flag a comment for attention, not necessarily to block it.
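To make the flag-don't-delete idea concrete, here is a minimal sketch of that style of moderation. The blocklist and function name are purely illustrative assumptions, not Disqus's or Perspective's actual implementation:

```python
# Hypothetical blocklist - real systems would use a much larger,
# curated list (or, like Perspective, a learned toxicity score).
BLOCKED_WORDS = {"bullsh&t", "troll"}

def flag_for_review(comment: str) -> bool:
    """Return True if the comment contains a blocked word.

    Note the comment is never deleted automatically; a human
    moderator gets the final say, as Jigsaw recommends.
    """
    words = comment.lower().split()
    return any(word.strip(".,!?") in BLOCKED_WORDS for word in words)

print(flag_for_review("You INQ guys are spreading bullsh&t fake news!"))  # True
print(flag_for_review("I never listen to Carly's reviews"))               # False
```

The key design point is the return value: the function reports a match rather than mutating or suppressing the comment, leaving the block/allow decision to a person.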
Early users have noticed that it seems to dislike anything in Arabic, regardless of the content, which suggests it has somehow picked up from its graders that Arabic is a bad thing. Echoes of Microsoft's Tay, which learned prejudice from its users.
At the moment the tech is still in its infancy. For example, "You INQ guys are spreading bullsh&t fake news!" would be picked up, but "I never listen to Carly's reviews" would not, because it's too soon for the software to tell whether that's down to sexism or simply expressing a preference.
If you'd like to try it, the Perspective website includes a demo of the API where you can vent bile and test it for 'toxicity'. All the more reason not to do it here, no? µ