Facebook is finally taking an assertive step toward combating misinformation, one that’s been on the wishlist of policy experts for years.
The company will point users who have interacted with harmful information about the coronavirus to the WHO.
Facebook just announced another step in its effort to set the record straight on misleading information about the novel coronavirus on its platform. In the coming weeks, the company will start directing people who have previously “liked,” reacted to, or commented on “harmful misinformation” about Covid-19 to information from more authoritative sources, such as the World Health Organization’s (WHO) myth-busting website.
This represents one of the first times Facebook will warn a specific set of users who have interacted with false information in the past. Experts have argued for years that the social network is rife with misleading information that taints public discourse, and they have asked Facebook to take this kind of retroactive approach.
“We want to connect people who may have interacted with harmful misinformation about the virus with the truth from authoritative sources in case they see or hear these claims again off of Facebook,” wrote Guy Rosen, Facebook’s vice president of integrity, in a company blog post released on Thursday. The notifications will apply only to Facebook and not its other platforms like Instagram and WhatsApp.
Since the beginning of the Covid-19 crisis, social media users have been posting popular and dangerous hoaxes about the virus, including false cures and myths about the origins of the outbreak. In response, companies like Facebook, YouTube, and Twitter have been stepping up their measures to flag and sometimes delete this type of content. In March alone, Facebook says it labeled 40 million posts as “false” on its network, relying on its network of independent third-party fact-checkers. Still, plenty of bad information regularly slips through the cracks, either missed by fact-checkers entirely or caught only after tens of thousands of users have already seen the posts in question.
Facebook included a screenshot of a new News Feed message that will be shown to users who have interacted with false coronavirus information, pointing them to WHO resources.
The design Facebook provided looks more like a gentle nudge than a specific warning, and some have called for stronger notifications that specifically correct the record on individual false claims. When Recode asked what the notifications will look like, a Facebook spokesperson said the design in the blog post is an “early version” and that the company is also testing “more explicit” variations.
“We’ll continue to iterate on these designs with a goal of ensuring people who’ve been exposed to harmful misinformation about Covid-19 are connected with the facts,” said the spokesperson.
For Facebook’s critics, the move toward retroactively notifying users about misleading Covid-19 content, even if incremental, is a welcome development. Many academics and policy experts have scolded Facebook for lax moderation not only on Covid-related topics but also on misinformation around issues like immigration and politics. Historically, Facebook and other social media companies have been reluctant to flag politically contentious information as false, arguing that over-policing content on their platforms could limit free speech. But with the coronavirus, the company has adopted a more aggressive approach.
“[T]he company has taken a key first step in cleaning up the dangerous infodemic surrounding the coronavirus, but it has the power to do so much more to fully protect people from misinformation,” wrote Fadi Quran, campaign director at nonprofit activist group Avaaz, in a statement to Recode. Avaaz is one of several groups that have been pushing for stronger fact-checking and for corrections to be issued more broadly on the platform, not just for content about Covid-19. New research commissioned by the organization shows that corrections on Facebook have a major impact on users’ views and can reduce people’s belief in misinformation by 50 percent.
Facebook declined to answer a question from Recode about whether it will apply its warnings to other types of misinformation in the future.
For companies like Facebook, it’s a lot easier to draw a line in the sand on misinformation about coronavirus topics than around more politically contentious ones, like gun rights, abortion, immigration, or even the 2020 US elections. While there’s still plenty of uncertainty about Covid-19, and even authoritative sources like the WHO have faced allegations of bias, it’s still much easier to show that a hoax about Covid-19 is wrong than to confirm the veracity of a personal attack on a politician.
In the next few weeks, we’ll learn more about what these new retroactive notifications to Facebook users who have been exposed to coronavirus misinformation will look like, not to mention how widely they’re being sent out. But it may take much longer to see whether Facebook’s new moderation strategy becomes a lasting mechanism for fighting misinformation online more broadly.