Facebook has announced that it'll enlist artificial intelligence (AI) in a bid to better tackle terrorist content.
The move comes just days after UK prime minister by default Theresa May said that social media firms should be fined for failing to remove extremist content.
"We agree with those who say that social media should not be a place where terrorists have a voice," Monika Bickert, Facebook's director of global policy management, and Brian Fishman, its counterterrorism policy manager, wrote in a blog post.
"We want to be very clear how seriously we take this — keeping our community safe on Facebook is critical to our mission. Our stance is simple: There's no place on Facebook for terrorism."
Facebook's use of AI to tackle such content will include 'image matching' technology, which automatically removes an uploaded image if it matches content that has already been flagged to the firm as extremist propaganda.
"When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video," the blog post explains. "This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site."
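The blog post doesn't detail how the matching works, but systems like this typically compare a perceptual hash of each upload against a database of hashes of previously removed content. As a rough illustration only (the function names and the toy 'average hash' scheme below are assumptions, not Facebook's actual method, which uses far more robust hashing), the idea can be sketched like this:

```python
# Illustrative sketch of hash-based image matching, the general technique
# described in Facebook's post. Real systems use robust perceptual hashes
# (e.g. PhotoDNA-style); this toy "average hash" over an 8x8 grayscale
# grid is for explanation only.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int
    with one bit per pixel: 1 if the pixel is at or above the mean."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hashes of content previously removed as propaganda (hypothetical store).
known_hashes = set()

def flag_upload(pixels, threshold=5):
    """True if the upload's hash is within `threshold` bits of any
    known-bad hash, i.e. it matches previously removed content."""
    h = average_hash(pixels)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

Because the hash tolerates small differences (the Hamming-distance threshold), a re-encoded or lightly edited copy of a removed image can still match, which is why hashing is preferred over exact byte comparison.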
Facebook will also use AI to analyse text that praises or supports terrorist organisations, to remove terrorist clusters, to detect and close down recurring fake accounts created to spread terrorism, and for cross-platform collaboration, which will see the same accounts banned from WhatsApp and Instagram.
Because Facebook's AI systems aren't that advanced yet, human moderators will also be used to ensure that legitimate speech isn't accidentally removed from the social network.
Facebook's blog post explains: "A photo of an armed man waving an ISIS flag might be propaganda or recruiting material but could be an image in a news story. Some of the most effective criticisms of brutal groups like ISIS utilise the group's own propaganda against it. To understand more nuanced cases, we need human expertise."
These human moderators, of which Facebook will hire a further 3,000, will also review posts reported by users and determine whether they should be taken down.
"We want Facebook to be a hostile place for terrorists. The challenge for online communities is the same as it is for real world communities - to get better at spotting the early signals before it's too late," Facebook's blog post concludes.
"We are absolutely committed to keeping terrorism off our platform, and we'll continue to share more about this work as it develops in the future."