President Trump speaks to the press on the White House lawn, May 2020. | Alex Wong/Getty Images
What happens when the medical misinformation comes from the president?
The big tech platforms found themselves in an unusual position during the first part of the coronavirus pandemic: getting praised for how they handled misinformation.
It was nice while it lasted.
As the coronavirus spread around the world in March, Google, Facebook, and Twitter quickly announced that they would ban and take down misleading and dangerous material about Covid-19 on their sites — or at least prevent it from getting much traction.
It was a notable change from their much-criticized performance in the last presidential election. And the tech companies said the lessons they learned from 2016, and the changes, hires, and investments they had made since then, let them move quickly to battle disinformation this time.
Two months later, it seems like the sheer volume of garbage that gets dumped onto the platforms may overwhelm their efforts to keep a lid on it. Witness the conspiracist “Plandemic” video, which suggests, among other things, that wearing a mask can make you sick: It moved widely around Facebook and YouTube for days before the platforms took it down earlier this month.
But it’s not just the amount of stuff that gets uploaded to the platforms that’s posing a problem for the tech companies, as they try to figure out what to leave up and what to take down.
It’s that the coronavirus is more than a public health crisis. It’s an unending series of political arguments, over everything from the Trump administration’s miserable response to the pandemic to the way government aid should be distributed to disputes over mask-wearing, or not-wearing. The debate about when and how to open up parts of the country is getting more rancorous and partisan.
And it will certainly continue through the US presidential race: On Tuesday, Twitter took its first action against a Trump tweet, by fact-checking his claim that mail-in voting leads to “substantially fraudulent” results. Expect to hear calls for the service to do the same the next time he tweets something misleading about the pandemic — say, promoting hydroxychloroquine as a way to stave off or cure Covid-19.
Which means the platforms, which desperately want to be seen as apolitical, are facing more political fights, where they’ll have to make hard decisions about taking down — or leaving up — controversial content like arguments against social distancing. It won’t be nearly as clear-cut as deciding to pull down fake news stories about the pope endorsing Trump.
The platforms, of course, are used to hearing complaints from across the political spectrum: Left-leaning commentators blame the tech companies for creating an environment that invited abuse and helped elect Trump. Conservatives argue (with little evidence) that tech companies unfairly censor them (even while they continue to use them and advertise with them, to great effect).
These political divides over how to respond to the pandemic have already forced the platforms to make controversial calls. Last month, for instance, Facebook took down event posts for anti-stay-at-home protests in at least three states, on the basis that the events “def[ied] government’s guidance on social distancing.” But in other states with similar rules, Facebook left up similar event posts promoting similar gatherings, which featured closely packed protesters who didn’t wear masks. The takedowns drew immediate criticism from Republicans, including Sens. Ted Cruz and Josh Hawley.
And at the end of April, YouTube removed an hour-long video featuring an interview with two California doctors, who argued that the coronavirus wasn’t nearly as harmful as other diseases, like the common flu — but only after the video, and clips of it, had been widely circulated on social media and had begun to be promoted by personalities on Fox News (and Tesla CEO Elon Musk).
That removal also drew howls from the likes of Tucker Carlson: “Informed debate is exactly what the authorities don’t want,” he told his Fox News audience. “They want unquestioned obedience, so they are cracking down on free expression.”
From the other side of the aisle, Democratic Rep. Adam Schiff in April called on YouTube, Twitter, and Facebook to crack down harder on content on their platforms that spreads “medical misinformation” (citing a Recode story), like the conspiracy theory that claims the coronavirus is caused by 5G wireless tech.
When it comes to politicians and government leaders, the platforms have traditionally been resistant to curbing any speech. This year, however, they have shown some willingness to crack down on misinformation.
In March, for instance, Twitter, Facebook, and YouTube all removed posts from Brazilian President Jair Bolsonaro promoting hydroxychloroquine as a coronavirus treatment — at the time, the drug hadn’t been proven to help Covid-19 patients, and since then, a study published in the medical journal The Lancet found that Covid-19 patients taking hydroxychloroquine had a higher risk of death than other patients. Around the same time, Twitter also took down similar messages from Rudy Giuliani, Trump’s lawyer.
That kind of restraint isn’t consistent, though. Trump has posted similar commentary on Twitter for weeks — and was at it again last week — and it’s hard to imagine any of the big tech companies taking down anything the president of the United States says at this point.
But never say never. On Tuesday, Twitter waded into uncharted waters when it affixed a “get the facts about mail-in ballots” button to a Trump tweet attacking mail-in voting; that button led to a page that declared Trump’s commentary “misleading.” Trump responded predictably, calling the label an attack on “FREE SPEECH.” It’s reasonable to expect the president will try to goad Twitter into fact-checking him again, all while making use of the enormous reach the platform affords him.
So: Is Twitter willing to attach the same kind of “get the facts” warning when Trump says that, say, social distancing rules should be ignored, or that the official death tolls from the Centers for Disease Control and Prevention are inflated?
It depends, says Twitter spokesperson Liz Kelley. Twitter already has a policy that calls for actions against “potentially harmful, misleading information related to COVID-19.” But when I asked her about Trump’s most recent tweet promoting hydroxychloroquine, she told me that “we won’t take action on wishes of hope for treatments, rather calls to action that would increase someone’s likelihood of harm, i.e., ‘stop social distancing and go out into the streets.’”
Translation: Trump’s tweets promoting an unproven, potentially dangerous drug are probably okay in Twitter’s view, as long as he walks a very fine line. Telling America he’s taking hydroxychloroquine is one thing — telling everyone to take it would be another, even if it’s very close on the continuum.
This isn’t what the tech executives thought they would be dealing with, at least early on. At the start of the pandemic, you could almost hear their relief, as they explained that this time around, things were simpler. This was a medical and scientific problem, so there couldn’t be a political component to what they were doing.
That is: They were simply relying on advice from nonpartisan groups like the World Health Organization and the CDC, and obeying mitigation rules set up by individual governments. If they found stuff that contravened that guidance, they didn’t have to agonize about what to do — they got rid of it.
“Anything that would go against World Health Organization recommendations would be a violation of our policy,” YouTube CEO Susan Wojcicki told CNN in April. “If someone’s spreading something that puts people in imminent risk of physical harm — we take that stuff down,” Facebook CEO Mark Zuckerberg told the same network the same month.
They were also trying to preempt problems by sending their users to what they think are reliable sources of information, whether that’s the CDC or news outlets. Twitter, Facebook, and YouTube all routinely promote sites and videos with information about the pandemic; last month, YouTube introduced a “fact-checking” feature that will surface information boxes when users type in specific queries like “bleach coronavirus.”
But now the platforms are back to struggling with how to moderate politically contentious content, and they’re contorting themselves in familiar ways as they try to explain specific actions.
Facebook, for instance, says the company didn’t take down protest posts at the behest of government agencies. But officials in New Jersey and Nebraska say they did reach out to Facebook before the site took the posts down. (The line between a social platform acting of its own volition to stop speech that violates government regulations and acting to take down speech because it’s been told to by government regulators is a thin but important one, notes Jesse Blumenthal from Stand Together, a think tank funded by the Koch family.)
And YouTube’s on-the-record explanation of why it took down the popular video of Dan Erickson and Artin Massihi, the two doctors questioning California’s lockdown strategies, is hard to parse at best. Here’s Ivy Choi, a YouTube spokesperson, via email:
We quickly remove flagged content that violates our Community Guidelines, including content that explicitly disputes the efficacy of local health authority recommended guidance on social distancing that may lead others to act against that guidance. However, content that provides sufficient educational, documentary, scientific or artistic (EDSA) context is allowed — for example, news coverage of this interview with additional context.
In practice, this seems to mean that some elements of the 58-minute interview are acceptable on the platform. Also: news stories that quote unacceptable parts of the interview.
But good luck figuring that out if you’re a casual news consumer who might hear that YouTube is “censor[ing] anything that just doesn’t fit their own agenda,” as Mike Huckabee told Fox & Friends last month. And good luck trying to understand why YouTube, which says it has trained its computers and its army of content moderators to quickly flag disinformation before it gets circulated on the platform, didn’t act on the video even as millions of people saw it and passed it around.
The tech giants aren’t the only ones that misjudged parts of the pandemic, of course. Many news outlets, as I previously wrote, fumbled the coronavirus story in its early weeks and didn’t fully grasp the scope of the problem for some time.
You can chalk up the tech leaders’ mistake to a different kind of misidentification. They thought this was an information problem they could solve with deletes and redirects. But if government officials and politicians are turning even the most basic elements of the pandemic — like the official tally of people who have died to date (a number Trump and his allies are trying to question) — into political disputes, then the platforms can no longer point to government guidance as their primary guide.
As was the case in 2016, this all leaves them trying to rule over unruly global platforms they specifically designed to run on their own — and they don’t have clear rules about how to proceed.