Earlier this week, Facebook, Google, Microsoft’s LinkedIn, Reddit, Twitter and YouTube sent out a joint statement saying they would be working closely together to combat misinformation about COVID-19 while also working to build community. But social media companies have said they were fighting misinformation before, too, without a lot of meaningful impact.
I dig into this in “Quality Assurance,” the segment where I take a deeper look at a big tech story. I spoke with Shira Ovide, who covers technology at The New York Times, and I asked her what those companies are doing differently. The following is an edited transcript.
Shira Ovide: I think what some of the big tech companies have said is that they’re collaborating on best practices to help spread good information and tamp down on bad information. What that means, practically, is that you’re seeing on places like YouTube and at the top of your Facebook or Instagram feed information that those companies are pushing you from the World Health Organization or credible news organizations. What we’re seeing here is basically a wholesale rewriting of internet governance by these large companies on the fly, because of this crisis.
Molly Wood: Doesn’t this raise many of the same issues that have always been raised? Can we trust these platforms to identify this stuff and get it right?
Ovide: Absolutely. I think all those same anxieties about the power of these tech companies are still there. What is the right balance between free speech and the free spread of good information, and tamping down on bad information? These companies don’t always make good decisions, they don’t enforce their own rules effectively, or they do so inconsistently. All those same problems are there; it just feels muted by the urgency of this public health crisis.
Wood: It seems like it is often the case that in an emergency, power gets flexed in ways that are not undone after the fact. I just wonder, will there be a taking advantage? Will there be a reset in some way of the roles and the goals if they flex all the power at once?
Ovide: All the problems of the big internet spaces are still there. You’re right, this could be a situation where the internet companies become more empowered, more emboldened than they have been before to patrol what happens and to collect more information about people, particularly if there’s pressure on them to collect information for health-tracing purposes. It just feels like it’s not the most urgent question right now for these tech companies.
Wood: I want to clarify: Do you think it’s not the most urgent question internally, externally or both?
Wood: If they are successful, is it going to show that they could have been successful at this all along?
Ovide: In some ways, a public health crisis, a global pandemic, is much more black and white than all these other issues that the companies deal with on a day-to-day basis. Maybe the rules that they’re writing for something that is so urgent — endangering the health of many millions of people — maybe those don’t apply to all the day-to-day stuff that these companies deal with, the kinds of internet communication that happen on a regular basis. But we’ll see.
Wood: If it succeeds, will there be any going back in terms of content moderation?
Ovide: I think there will be a going back, because again, the black-and-white issues that a pandemic presents are different than “Do we allow Trump to tweet things that seem vaguely threatening?” or “What do we do about misinformation that’s spreading in India?” Those are things that are either less black and white or maybe not at the top of the priority pile for these big companies. This coronavirus crisis is different, because it’s both extremely urgent — everybody is making this their top priority — and the choices they have to make are so cut and dried.
Wood: Is it that cut and dried, though? What is Twitter going to do about President Trump, who has sincerely tweeted misinformation, at a time when there is not that much known about how many cases there are in the United States, or about how the virus behaves, or about whether it infects children? There is a lot of speculation and theorizing — in some cases from expert sources, in some cases maybe not. It feels to me like it’s still pretty hard.
Ovide: You’re right. It’s a good point. I think there are still going to be edge cases. I’m also particularly worried about misinformation spreading in smaller groups. We’ve seen things like Facebook groups and WhatsApp, which are private messages or private postings. Those are places where, even pre-coronavirus, dangerous misinformation started to spread, and it’s much harder for those big internet companies to police that stuff. You’re right, I think there are going to be hard calls, and I think part of the issue is going to be that people are paying close attention, but I think it just feels so urgent for the companies to do something that they’re going to get a lot of slack. Again, for good or for ill, the power that we worry about these big tech companies having, now we want them to flex that power.
On Wednesday, Twitter said it would ban and remove tweets spreading misinformation that could, in effect, cause the new coronavirus itself to spread faster. Twitter took down tweets by some prominent people, including some that encouraged people to take to the streets and encouraged businesses to defy orders to close down or shelter in place.
Also, researchers in Italy who usually track misinformation and bots online analyzed more than 121 million tweets based on their emotional content, as well as misinformation, and found that “the amount of unreliable news is a huge potential threat to public health.” The World Health Organization has labeled it an “infodemic,” but experts said it’s prompting an opposite and hopefully equal response from fact-checkers. Part of the problem, they say, is a lack of good information to counter the bad. The researchers also found that “the whole world is sad,” which, I can’t lie, broke my heart a little.
On the Facebook side of things, CEO Mark Zuckerberg has started doing live streams with health experts like Dr. Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases, and America’s doctor right now. CNN has a piece pointing out what Shira Ovide also noted: Facebook’s biggest misinformation problem is WhatsApp and news that spreads quickly in closed groups.
We’ve been talking this week about the internet, its necessity and how it’s holding up. Despite solid performance in the early days, listeners report that some cracks are starting to show. Europe has asked Netflix and YouTube to help ease the burden by reducing video quality for 30 days from high definition to standard definition. Netflix has agreed.
Now for some sheer nerdery. Intel is releasing a new research system for something called neuromorphic computing — computer architecture that mimics the way the human brain works — which can train artificial intelligence systems on far less data so they’re way faster at learning and use way less energy. Hopefully this is all without the downsides of the human brain, such as getting super distracted by cat pictures all the time, believing everything you read on the internet and making up stories about how your friend hates you because they replied to your text with one “k.”
WFH do’s and don’ts
Shawn O’Shea on Twitter says do make sure PowerPoint is in presentation mode for remote sharing, presumably because otherwise the slides are just tiny, you spoil your big reveals and everybody can see your notes. Might as well just send an email. Do set yourself a timer to remind yourself to stop working — without the forcing function of a commute, a lot of you are just hanging out at your desks until well into the night. Maybe you can download an audio file of that old “Flintstones” quittin’ time bell.
Finally, please go into the weekend with this one piece of advice from me to you. Stay safe. Stay healthy. Check in on your loved ones. Do the right thing for your community. Hug your kids and critters. Don’t forget to mute your mics.