Facebook says it's using AI to fight terrorism on its platform

Francis Harris
June 16, 2017

Other questions, the company said, include: "Is social media good for democracy?" The first post in the series addresses how Facebook responds to the spread of terrorism online.

"We don't want terrorists to have a place anywhere in the family of Facebook apps".

Additionally, specialized training, partner programs, industry collaboration, and government partnerships all play a role in Facebook's work against terrorists online. Following attacks in London and Manchester in the past four months, UK Prime Minister Theresa May pressed other leaders from the Group of Seven nations to consider further regulation of social media companies to compel them to take additional steps against extremist content.

"It's also essential for journalists, NGO workers, human rights campaigners and others who need to know their messages will remain secure". This new emphasis from Zuckerberg has followed uproar over Facebook's role in the proliferation of false news accounts during the US election campaign past year, as well as the spread of extreme content, such as videos of murder, posted to Facebook. "We want Facebook to be a hostile place for terrorists", the blog post reads. It has more than 150 people primarily focused on countering terrorism, including academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers. And if a threat is imminent, a separate Facebook team communicates with law enforcement.

Facebook - along with Microsoft, Twitter, and YouTube - has committed under a voluntary EU code of conduct to review and remove reports of illegal hate speech.

Criticized by some as a platform for propaganda or even recruitment by terrorists, the US company, headquartered in Menlo Park, California, pledged that it is "absolutely committed to keeping terrorism off our platform". It says it has already reduced the time fake accounts are active.

Facebook believes it can do better at finding and stopping terrorists from sharing content on its platform by using technology, particularly artificial intelligence.

YouTube, Facebook, Twitter and Microsoft last year created a common database of digital fingerprints, automatically assigned to videos or photos of militant content, to help one another identify the same material on their platforms.

Video fingerprints, known as "hashes", are used to help Facebook's algorithms find and remove extremist videos before they are ever posted or made public.
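
As a rough sketch of how this kind of matching can work, the Python example below fingerprints files with the standard hashlib module and checks new uploads against a shared set of known fingerprints. It is illustrative only: the shared database described above uses perceptual hashes designed to survive re-encoding and cropping, whereas a cryptographic hash such as SHA-256 only matches byte-identical copies, and the database and function names here are hypothetical.

```python
import hashlib

# Hypothetical shared database of fingerprints of known extremist media.
# In the industry consortium described above, each company contributes
# hashes; here it is just an in-memory set.
known_hashes = set()

def fingerprint(data: bytes) -> str:
    # A SHA-256 digest only matches byte-identical copies; production
    # systems use perceptual hashes that tolerate re-encoding and edits.
    return hashlib.sha256(data).hexdigest()

def register_known_content(data: bytes) -> None:
    # Add a confirmed piece of terrorist content to the shared database.
    known_hashes.add(fingerprint(data))

def should_block_upload(data: bytes) -> bool:
    # Check an upload against the database before it is made public.
    return fingerprint(data) in known_hashes

# Once one platform flags a video, every platform sharing the database
# can block re-uploads of the identical file.
register_known_content(b"...bytes of a flagged video...")
print(should_block_upload(b"...bytes of a flagged video..."))  # True
print(should_block_upload(b"an unrelated home video"))         # False
```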

Facebook's announcement comes on the heels of a blog post from Facebook policy chief Elliot Schrage, which pledged that the company would begin to "talk more openly about some complex subjects", including how platforms should fight the spread of terrorist "propaganda" online. May has previously opposed freely available end-to-end encryption, which makes communications essentially inaccessible to third parties and is a key feature of the Facebook-owned WhatsApp.
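
To make that technical claim concrete: in an end-to-end scheme, only the two endpoints ever hold the private keys needed to read a message, so the platform relaying it sees nothing but ciphertext. The sketch below uses the PyNaCl library to illustrate the basic public-key exchange; it is not WhatsApp's implementation, which is built on the more elaborate Signal protocol.

```python
# A minimal public-key exchange with PyNaCl (pip install pynacl).
# Illustrative only: WhatsApp uses the Signal protocol, which layers
# forward secrecy and group messaging on top of ideas like these.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave
# the device, which is what keeps the relaying server out of the loop.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 6")

# The platform relaying this message sees only ciphertext; without a
# private key it cannot recover the plaintext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at 6"
```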

And though Bickert and Fishman attributed Facebook's moves to "questions" raised by recent terror attacks, Hany Farid, a computer scientist at Dartmouth College, pointed out that the attacks spurred more than just questions.

When material is identified and removed, Facebook says, its algorithms "fan out to try to identify related material that may also support terrorism".
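
One way to picture that "fan out" step is as a short traversal of the graph of connections around a removed item: the accounts that posted it, the pages that shared it, and the other material those accounts uploaded. The breadth-first sketch below is a guess at the shape of such a process; the graph data and the two-hop limit are assumptions, not Facebook's published design.

```python
from collections import deque

# Hypothetical content graph: each node (a post, page, or account) maps
# to its direct connections. Facebook has not published its data model;
# this structure is purely illustrative.
graph = {
    "removed_video": ["account_a", "page_x"],
    "account_a": ["removed_video", "video_2"],
    "page_x": ["removed_video", "account_b"],
    "account_b": ["page_x", "video_3"],
    "video_2": ["account_a"],
    "video_3": ["account_b"],
}

def fan_out(start, max_hops=2):
    # Breadth-first walk from a removed item, flagging everything
    # within max_hops of it for review.
    seen = {start}
    frontier = deque([(start, 0)])
    flagged = set()
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                flagged.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return flagged

# Everything within two hops of the removed video is queued for review.
print(fan_out("removed_video"))
# {'account_a', 'page_x', 'video_2', 'account_b'} (order may vary)
```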
