San Francisco: Facebook is removing terror posts and helping to prevent suicides by using Artificial Intelligence (AI) and Machine Learning (ML) techniques to analyse posts on the social media platform.
Facebook's AI tools help identify when someone might be expressing thoughts of suicide, including on Facebook Live.
The suicide-prevention initiative, which uses pattern recognition to detect posts or live videos where someone might be expressing thoughts of suicide and helps first responders react faster, will eventually be available worldwide, except in the European Union, Facebook said in a blog post on Tuesday.
"Facebook is a place where friends and family are already connected and we are able to help connect a person in distress with people who can support them," said Guy Rosen, Facebook's Vice President of Product Management.
"We use signals like the text used in the post and comments (for example, comments like 'Are you ok?' and 'Can I help?' can be strong indicators).
"In some instances, we have found that the technology has identified videos that may have gone unreported," Rosen said.
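The text signals Rosen describes can be pictured as a simple scoring pass over a post's comments. The phrase list, weights, and function names below are illustrative assumptions, not Facebook's actual system, which relies on trained machine-learning classifiers rather than a fixed keyword list:

```python
# Hypothetical sketch of comment-based text signals: phrases such as
# "Are you ok?" and "Can I help?" are treated as strong indicators.
# Phrases and weights are invented for illustration.
CONCERN_PHRASES = {
    "are you ok": 0.6,
    "can i help": 0.6,
    "thinking of you": 0.3,
}

def comment_signal_score(comments):
    """Sum the weights of concern phrases found in a post's comments."""
    score = 0.0
    for comment in comments:
        text = comment.lower()
        for phrase, weight in CONCERN_PHRASES.items():
            if phrase in text:
                score += weight
    return score

# A post whose comments accumulate enough signal would be routed
# to trained human reviewers, as the article describes.
print(comment_signal_score(["Are you OK?", "Can I help?", "nice photo"]))
```

In practice a threshold on such a score would only prioritise a post for human review, never trigger an automatic intervention on its own.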
Facebook has a dedicated team of specialists with specific training in suicide and self-harm.
Removing terror posts
Facebook is also using AI and ML techniques to remove Islamic State (IS) and Al Qaeda-related terror content from its platform before anyone flags it, the social media giant said on Wednesday.
"Today, 99 per cent of the IS and Al Qaeda-related terror content we remove from Facebook is content we detect before anyone in our community has flagged it to us, and in some cases, before it goes live on the site," Monika Bickert, head of Global Policy Management at Facebook wrote in a blog post.
Facebook does this primarily through the use of automated systems like photo and video matching and text-based machine learning.
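The photo- and video-matching approach can be sketched as a fingerprint database: once a piece of content is confirmed as terror material, its fingerprint is stored, and re-uploads are checked against it before they spread. The sketch below uses a plain SHA-256 hash, which only catches byte-identical copies; production systems use perceptual hashes that survive re-encoding and cropping, and the function names here are assumptions for illustration:

```python
import hashlib

# Fingerprints of content previously confirmed and removed (illustrative).
known_fingerprints = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint; real systems use perceptual hashing."""
    return hashlib.sha256(data).hexdigest()

def flag_known_content(data: bytes) -> bool:
    """Return True if an upload matches previously removed content."""
    return fingerprint(data) in known_fingerprints

# Once moderators confirm a removal, its fingerprint is registered...
known_fingerprints.add(fingerprint(b"<removed propaganda image bytes>"))

# ...so identical re-uploads can be caught automatically, in some
# cases before the content goes live.
print(flag_known_content(b"<removed propaganda image bytes>"))  # True
print(flag_known_content(b"an unrelated photo"))                # False
```

This kind of matching is what allows copies of known material to be removed quickly after re-upload, as the figures quoted below suggest.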
"Once we are aware of a piece of terror content, we remove 83 per cent of subsequently uploaded copies within one hour of upload," added Brian Fishman, head of Counterterrorism Policy, Facebook.
Deploying AI for counterterrorism is not as simple as flipping a switch.
A system designed to find content from one terrorist group may not work for another because of language and stylistic differences in their propaganda.
The social media giant is currently focusing these techniques on terrorist content related to IS, Al Qaeda and their affiliates.
The use of AI against terrorism is increasingly bearing fruit, but it must ultimately be reinforced with manual review by trained experts, Facebook said.
Facebook has also announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT), through which it is working with Microsoft, Twitter and YouTube to fight the spread of terrorism and violent extremism across their platforms.