THE SOCIAL NETWORK Facebook on Friday announced plans to bring its anti-extremism 'Online Civil Courage Initiative' (OCCI) to the UK, following a spate of terrorist attacks in the country.
The OCCI, which launched in France and Germany earlier this year, offers funding and training to non-government organisations (NGOs) that are trying to tackle hate speech on the social network.
While full details of the initiative remain unclear, Facebook said that it would effectively give free ads to NGOs trying to counter hate speech, and would help fund academic research into "online and offline patterns" of extremism.
Facebook's chief operating officer Sheryl Sandberg said in a statement: "The recent terror attacks in London and Manchester - like violence anywhere - are absolutely heartbreaking. No one should have to live in fear of terrorism - and we all have a part to play in stopping violent extremism from spreading."
"The UK Online Civil Courage Initiative will support NGOs and community groups who work across the UK to challenge the extremist narratives that cause such harm. We know we have more to do - but through our platform, our partners and our community we will continue to learn to keep violence and extremism off Facebook."
For its UK launch of the OCCI, Facebook has partnered with the anti-Islamophobia group Tell MAMA, Imams Online and the Jo Cox Foundation.
Brendan Cox, the widower of murdered MP Jo Cox and the founder of the Jo Cox Foundation, has welcomed the move.
"This is a valuable and much-needed initiative from Facebook in helping to tackle extremism," he said.
"Anything that helps push the extremists even further to the margins is greatly welcome. Social media platforms have a particular responsibility to address hate speech that has too often been allowed to flourish online.
"It is critical that efforts are taken by all online service providers and social networks to bring our communities closer together and to further crack down on those that spread violence and hatred online."
Just last week, Facebook announced that it would use artificial intelligence (AI) in a bid to help it better tackle terrorist content, a move that came just days after UK PM Theresa May said that social media firms should be fined for failing to remove extremist content.
Facebook's use of AI to tackle such content includes 'image matching' technology: newly uploaded images are automatically removed if they match a post that has already been flagged to the firm as extremist propaganda.
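Matching of this kind can be sketched roughly as follows. This is a simplified illustration, not Facebook's actual system: real deployments use perceptual hashes that tolerate re-encoding and cropping, whereas this toy example uses an exact SHA-256 digest, and all names here are hypothetical.

```python
import hashlib

# Hypothetical store of digests for images reviewers have already flagged
flagged_hashes = set()

def flag_image(image_bytes: bytes) -> None:
    """Record the digest of an image flagged as extremist propaganda."""
    flagged_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def matches_flagged(image_bytes: bytes) -> bool:
    """True if a new upload is byte-identical to a previously flagged image."""
    return hashlib.sha256(image_bytes).hexdigest() in flagged_hashes

flag_image(b"previously-flagged-image-bytes")
print(matches_flagged(b"previously-flagged-image-bytes"))  # True
print(matches_flagged(b"some-unrelated-image-bytes"))      # False
```

An exact hash only catches identical files; the appeal of perceptual hashing in production systems is that visually similar images map to nearby digests, so re-uploads survive minor edits.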
Facebook will also use AI to analyse text that praises or supports terrorist organisations, to remove terrorist clusters, to detect and close down recurring fake accounts set up to spread terrorist material, and for cross-platform collaboration, which will see the same accounts banned from WhatsApp and Instagram.