Facebook has announced a new policy that allows it to take down networks of accounts engaging in “coordinated social harm.” The company said the change could help the platform fight harmful behavior it wouldn’t otherwise be able to fully address under its existing rules.
Unlike “coordinated inauthentic behavior,” Facebook’s policy for dealing with harm that comes from networks of fake accounts, coordinated social harm gives the company a framework to address harmful actions from legitimate accounts. During a call with reporters, the company’s head of security policy, Nathaniel Gleicher, said the policy is necessary because bad actors are increasingly trying to “blur the lines” between authentic and inauthentic behavior.
“We are seeing groups that pose a risk of significant social harm, that also engage in violations on our platform, but don't necessarily rise to the level for either of those where we’d enforce against for inauthenticity under CIB [coordinated inauthentic behavior] or under our dangerous organizations policy,” Gleicher said. “So this protocol is designed to capture these groups that are sort of in between spaces.”
Gleicher added that the new protocols could help Facebook address networks of accounts spreading anti-vaccine misinformation or groups trying to organize political violence. In announcing the change, Facebook said it took down a small network of accounts in Germany linked to the “Querdenken” movement, which has spread conspiracy theories about the country’s COVID-19 restrictions and has been “linked to off-platform violence.”
Facebook said it could take “a range of actions” in enforcing its new rules around coordinated social harm. That could include banning accounts — as it did with the “Querdenken” movement — or throttling their reach to prevent content from spreading as widely.
The issue of how to handle groups that break Facebook’s rules in a coordinated way has been a difficult one for the company, which up until now has primarily focused on taking down networks that rely on fake accounts to manipulate its platform. The issue came up earlier this year following the January 6th insurrection as Facebook investigated the “Stop the Steal” movement. According to an internal report obtained by BuzzFeed News, Facebook employees suggested its existing policies weren’t equipped to handle “inherently harmful” coordination by legitimate accounts, which prevented it from realizing “Stop the Steal” was a “cohesive movement” until it was too late.
During a press call, Gleicher said that the “work on this policy started well before January 6th.” But he added that the company’s enforcement against high-profile groups had informed its decision making. “If you think about our enforcement against QAnon-related actors, if you think about our enforcement against ‘Stop the Steal,’ if you think about our enforcement against other groups — we learned from all of them.”