In July 2021, some Facebook users reported seeing new warning messages from site administrators about "harmful extremist content" or asking whether they felt someone they knew was "becoming an extremist."
The reports were genuine: they captured a new Facebook-led pilot initiative intended to help identify users or content that breached Facebook's existing bans on "objectionable content." For that reason, we rate this claim "True."
According to reputable news outlets, the new messages were appearing for some users when they used the desktop or mobile version of Facebook. One notice took the form of a question, "Are you concerned that someone you know is becoming an extremist?" while the other warned, "you may have been exposed to harmful extremist content recently."
Another alert read, according to CNN: "Violent groups try to manipulate your anger and disappointment. [...] You can take action now to protect yourself and others."
Facebook said the small test, which ran only on its main platform, was taking place in the United States as a pilot for a global approach to preventing radicalization on the site.
"This test is part of our larger work to assess ways to provide resources and support to people on Facebook who may have engaged with or were exposed to extremist content, or may know someone who is at risk," said a Facebook spokesperson in an emailed statement. "We are partnering with NGOs [non-governmental organizations] and academic experts in this space and hope to have more to share in the future."
We reached out to the tech giant's communications team ourselves to discuss the notices but have not yet received a response. We will update this report if that changes.
All messages gave recipients the option to "get support" from one or more Facebook partners. A Facebook spokesperson told CNN they included Life After Hate, an advocacy group that helps people leave violent far-right movements.
According to Reuters, the effort was part of Facebook's promise to counter extremist content after a user live-streamed himself opening fire at a New Zealand mosque — killing 51 people — in March 2019.
Andy Stone, a spokesperson for Facebook, confirmed the accuracy of CNN's article about the new messages in the tweet displayed below.
Ultimately, the new warnings were a response to critics who believe the tech giant could have done more to prevent users from circulating false claims during the 2020 election and planning violence like the Jan. 6 Capitol insurrection.
In March, for example, Avaaz, a nonprofit that seeks to curb misinformation, uncovered at least 267 pages or groups that it said spread violence-glorifying content — some of which remained active despite Facebook's efforts to block users from such material.
Snopes has also identified such groups that have seemingly evaded the site's attempts to ban violent rhetoric, including a militia group in which users discussed plans to open fire on "any one that starts rioting" after the police shooting of a Black man, Jacob Blake, in Kenosha, Wisconsin.
[See here for exclusive Snopes analysis titled, "Violence Brewed in Facebook Groups Ahead of 'Stop The Steal' Protests"]