"A LIE CAN get halfway around the world before the truth as got its shoes on," the old saying goes. Now a lie doesn't even need to worry about footwear, being carried effortlessly from conspiracy theorist to conspiracy theorist before nonsense becomes the accepted truth. It's safe to say that the reservoir of good will social networks generated via the Arab Spring is running a little dry eight years later.
The alarming extent of this problem is laid bare by a new study from Indiana University, which examined how 400,000 "low-credibility" articles were spread in more than 14 million messages over a one-year period. The verdict? Just six per cent of accounts, identified as bots, were responsible for spreading 31 per cent of the fake news plaguing the platform. They often managed this within two to ten seconds of an article appearing, and it's hard not to grudgingly admire their effectiveness.
The study's co-author Giovanni Luca Ciampaglia argues that the bots' effectiveness lies not in being convincing in isolation, but in the illusion of popularity they create in large numbers, which makes a news item appear more trustworthy.
He told Ars Technica: "People tend to put greater trust in messages that appear to originate from many people. Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them."
Once this initial false credibility has been established by sheer volume, real people begin to share the content and things really take off. Indeed, the authors uncovered a class of bot that specifically targets real users with large follower counts in an effort to get them to share the nonsense.
On highly charged political issues, that isn't too hard either: partisan types on all sides do like to amplify things that reinforce their beliefs, and fact-checking and critical thinking just slow that process down. It's what study co-author Filippo Menczer calls the "useful idiot" paradigm. Useful to the bots' masters, that is, not to society as a whole.
That's unfortunate because this ability to spread messages quickly across social networks can be an unambiguous force for good when harnessed correctly. Bots could amplify messages to keep people safe in the face of terror attacks, or natural disasters. Unfortunately, right now they're too busy needling our prejudices with ruthless efficiency. µ