Headlines have framed Meta’s decision to eliminate third-party fact-checking as a concession to President-elect Donald Trump. In actuality, the move reflects a hard truth: Social media platforms aren’t well-positioned to combat misinformation—and such efforts can be counterproductive.
That was the central finding of a working paper released last year by Harvard Business School professors Scott Duke Kominers and Jesse M. Shapiro. Kominers and Shapiro say Meta’s decision could increase transparency and free resources to address other forms of content that the platform can police more effectively.
Working Knowledge talked to Kominers, the Sarofim-Rock Professor of Business Administration, and Shapiro, the George Gund Professor of Economics and Business Administration, about how Meta’s decision might affect companies, technology, and society more broadly. The interview has been lightly edited for length and clarity.
What do you think about Meta’s decision?
Jesse Shapiro: There are aspects of content moderation that are very reasonable and my guess is that they will persist. These involve situations where removing information clearly limits the potential for something really bad to happen.
So if I were to go on social media and say, “I don’t like one of Scott’s papers. If you agree with me, here’s where he lives, so you can go find him and complain to his face.” Obviously, that could go very badly, and taking Scott’s address out of that comment would limit the potential for harm.
But there are other kinds of situations where I think the platforms have been trying to fight a losing battle. Let’s say I go online and say, “I think genetically modified foods are really harmful” even though maybe there’s some scientific consensus that that’s not true. Platforms might try to flag that content and point to third-party scientific sources, but I’m skeptical that doing that will improve outcomes.
And there are a lot of ways in which it can make things worse. Different people have different notions of who’s trustworthy as a source. One person’s objective third party is another person’s “group in cahoots with Big Food or Big Pharma.” By appearing to take sides, the platforms might make some users think they are actively promoting one view or another.
Scott Duke Kominers: Our research highlights the distinction between two different types of information.
There’s information that creates harm by changing or affecting people’s beliefs. But if people don’t trust the moderation process, then you can’t necessarily ensure that your moderation is going to make them believe what you want them to believe.
By contrast, if the actual harm is from the information itself—through doxxing people’s addresses, for example, or hate speech—blocking the information stops the harm immediately. Those direct harms often have more concrete definitions and are easier to detect.
Platforms are much better positioned to detect and respond directly to content that is either directly harmful or enables direct harm. They’re very poorly positioned to deal with content that creates harm through effects on people’s beliefs.
Could X’s moderation approach be more effective?
Shapiro: This is something we want to look into more. But, intuitively, I think there’s something smart about trying to leverage peer-to-peer trust in designing some of these systems.
Kominers: What the community notes mechanism does is find information that lots of different groups of people agree on, especially ones who often disagree. If you can find something that many different groups agree is true all at once, then it’s more likely that someone in any given group agrees on it as well.
It’s not clear this is the optimal mechanism, but it’s intuitive why this would help a little bit with the trust problem we described [above].
Shapiro: But precisely how effective it is, what might make it work better—these are interesting questions for future research.
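As a rough illustration of the bridging intuition Kominers describes, here is a minimal sketch in Python: a note is surfaced only when every viewpoint cluster that rated it found it helpful, not just raters overall. The clusters, scores, and thresholds are hypothetical, and this is a simplification for exposition, not X’s actual Community Notes algorithm.

```python
from collections import defaultdict

def should_surface_note(ratings, min_per_group=0.7, min_groups=2):
    """Surface a note only if every viewpoint cluster that rated it found it
    helpful on average (a toy version of 'bridging': agreement across groups
    that usually disagree).

    ratings: list of (group_id, helpful) pairs, with helpful in {0, 1}.
    """
    by_group = defaultdict(list)
    for group_id, helpful in ratings:
        by_group[group_id].append(helpful)

    if len(by_group) < min_groups:
        return False  # not enough distinct perspectives to judge agreement

    # Require broad agreement within *every* group, not just a high overall average.
    return all(sum(votes) / len(votes) >= min_per_group for votes in by_group.values())

# Both groups find the note helpful -> surface it.
print(should_surface_note([("A", 1), ("A", 1), ("B", 1), ("B", 1), ("B", 0), ("B", 1)]))  # True
# Only one group finds it helpful -> don't.
print(should_surface_note([("A", 1), ("A", 1), ("B", 0), ("B", 0)]))  # False
```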
Could Meta’s change encourage more harmful content?
Shapiro: I don’t think we know exactly what this will mean on the ground, but one of the more prominent parts of this announcement focused on pockets of content where the company had decided to invest in figuring out what was true, then putting up flags and pointing people to correct information. Things like election misinformation and vaccine misinformation are pretty different from hate speech, inciting violence, and facilitating criminal activity, which I imagine platforms will continue to contend with in one way or another.
Kominers: Indeed, it’s hard to know for sure, but I would hope that this frees up more resources and focus to police directly harmful content.
Is there a role for artificial intelligence?
Kominers: Absolutely, and this is how a lot of early-stage content filtration systems work. They use algorithms to try to identify content that is likely to be harmful and then route it to human moderators (or even filter it immediately and just use humans as a backstop against errors). The wave of new, more human-interpretable AI models—as well as AI models that can do abstraction, especially around text and images—gives platforms more power to quickly identify and respond to harmful content.
Shapiro: I agree with all of that. The other side of the coin, though, is that AI also enables more creative ways to get around some of these filters. If you want to sell drugs on an online marketplace—and you’re not supposed to—there may be ways to use AI to disguise images. There’s a bit of a cat-and-mouse game there, just like we saw in earlier times with, say, spam filters. Some of the time, the filters are winning, and some of the time, it feels like the spam is winning.
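As a sketch of the kind of pipeline Kominers describes, the toy code below triages content on a model’s harm score: very confident cases are removed automatically, uncertain ones are routed to a human review queue, and the rest are published. The classifier, thresholds, and names here are invented for illustration, not any platform’s real system.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ModerationPipeline:
    """Toy two-stage moderation: a model scores content for likely harm,
    and the score decides whether to auto-remove, hold for human review,
    or publish. Humans act as a backstop for the model's errors."""
    score_fn: Callable[[str], float]   # hypothetical harm classifier, returns 0..1
    auto_remove_at: float = 0.95       # very confident -> remove immediately
    review_at: float = 0.60            # uncertain -> route to a human moderator
    review_queue: List[str] = field(default_factory=list)

    def handle(self, post: str) -> str:
        score = self.score_fn(post)
        if score >= self.auto_remove_at:
            return "removed"
        if score >= self.review_at:
            self.review_queue.append(post)  # a human moderator decides later
            return "held_for_review"
        return "published"

# Usage with a stand-in scorer (a real system would call an ML model here).
pipeline = ModerationPipeline(score_fn=lambda post: 0.99 if "sell drugs" in post else 0.1)
print(pipeline.handle("happy birthday!"))      # published
print(pipeline.handle("where to sell drugs"))  # removed
```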
Will brands change how they manage online discourse?
Shapiro: The move away from fact-checking shows that companies can’t count on social media platforms to be central arbiters of the truth. And companies need to recognize that there will be different views of what’s true, and that’s the environment we need to live with. Hoping that some regulator or tech platform will come in and sweep all that away is a vain hope. That’s not going to happen.
Kominers: Companies have to understand that policing people’s views at the platform level is an unsolvable problem. And that puts more weight and importance on the company’s own messaging and brand management.
Will there be any ripple effects for society?
Kominers: It’s hard to know what the long-run equilibrium is. But in contexts like these, where people come in with a wide range of beliefs and varying levels of trust in the platform, there was already a lot of misinformation floating around, and, as our research shows, there wasn’t really a way to stop it. The platform’s attempts to stop it, if anything, could have been making it worse.
One could even imagine hypothetically that the impact of the change could go in the other direction: Once people know that the platform is not trying to fact-check, they invest more time and energy in figuring out how to curate their networks and how to interpret and process the information they’re seeing from them. I’m not saying that’s what’s going to happen, to be clear—just that even the direction of the long-run impact isn’t necessarily apparent.
Shapiro: I would just add that some of the tensions in society are precisely about who knows what’s best. Creating the impression that there’s a bunch of very rich, powerful people in Silicon Valley who work with nonprofits to tell you what’s true, and if you say something they don’t agree with, they put a flag in front of it or a big, red symbol around it saying that it’s baloney—that can easily be counterproductive.
What does this say about social media companies?
Shapiro: Meta started off as a social site for college students. It evolved into a place where everybody could share pictures of how much fun they’re having on their birthday. And then somewhat by accident it became a very central part of where people get news and political information.
Part of what we’ve seen as the company has moved from one policy to another over time is trying to figure out how you operate as a private actor in that role. I think this announcement is this company and, to some extent, the broader industry realizing that there’s a role here that other people would like them to take—the role of policing truth on the internet—which they simply cannot do. There was a lot of pressure for them to step into that role for a long time.
Kominers: These platforms have become a key part of global information infrastructure, and once you are global information infrastructure, you’re exposed to all types of people with all types of beliefs. That makes establishing trust incredibly difficult and, in turn, means that transparency and clarity are really valuable for at least helping people understand what they’re getting out of the system.
This partly reflects my bias toward thinking a lot about decentralized platforms and ecosystems, but I think there’s value in the idea of social media as protocols rather than platforms. There’s a new wave of social media platforms experimenting with storing all of the information in an open, publicly accessible protocol that lets people develop their own feed algorithms and content filtration and aggregation systems, giving many different windows into the same information.
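A minimal sketch of what that could look like in practice: posts live in an open data layer, and each reader (or third-party developer) supplies their own ranking and filtering logic. The Post fields and build_feed function below are hypothetical stand-ins, not any specific protocol’s real API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Post:
    author: str
    text: str
    likes: int
    followed: bool  # whether the reader follows this author

def build_feed(posts: Iterable[Post],
               rank: Callable[[Post], float],
               keep: Callable[[Post], bool] = lambda p: True,
               limit: int = 50) -> List[Post]:
    """With content stored in an open protocol, anyone can write their own
    ranking and filtering logic; the platform no longer owns the feed."""
    return sorted((p for p in posts if keep(p)), key=rank, reverse=True)[:limit]

# One reader boosts posts from people they follow; another could rank purely
# by engagement, or filter out topics they don't want; same underlying data.
posts = [
    Post("alice", "new paper out!", likes=12, followed=True),
    Post("bob", "hot take on GMO foods", likes=340, followed=False),
]
my_feed = build_feed(posts, rank=lambda p: p.likes + (1000 if p.followed else 0))
print([p.author for p in my_feed])  # ['alice', 'bob']
```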
Shapiro: Maybe it’s a somewhat imperfect analogy, but if we think about the days of landlines, or even my cell phone today, nobody thought it was a good idea for the phone company to try to come into the middle of your phone call and say, “Actually, what you just said is not true. You said this thing about the US economy or public policy or public health that’s false.” Obviously, it wasn’t feasible technologically. But it’s also not an effective way to organize information in a society.
Where do you plan to take this research?
Shapiro: One concrete thing is to try to understand whether things like community notes, these more peer-to-peer methods of evaluating claims, are likely to be effective. And how could they be more effective? Under what conditions are they likely to be most useful?
Also, probing some of the fundamental limits: How much of this is a technology problem we can improve, maybe with AI, versus how much is deeper, such that no matter what technologies we have, we won’t really be able to solve it?
Kominers: There’s also a question of how individual-level authentication and certification can interact with this. So in a world where we’re worried about deep-fake videos becoming very easy to produce, I’ve been very interested in questions like: How can we develop technologies that allow an individual to certify content that they’ve created?
It’s sort of like providing a micro source of truth that’s relevant for interpreting a particular piece of information. So rather than trying to find the top-down global solution to what is true, use technology to inject critical pieces of verifiably true information along the way.
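One way to “inject verifiably true information” at the level of a single piece of content is a digital signature: the creator signs the bytes they produced, and anyone holding the creator’s public key can check that the content is unmodified and really came from them. Below is a minimal sketch using Ed25519 via the third-party cryptography package; it illustrates the general idea, not the specific technologies the researchers have in mind.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The creator generates a keypair once, publishes the public key
# (e.g., on their profile), and signs each piece of content they make.
creator_key = Ed25519PrivateKey.generate()
public_key = creator_key.public_key()

content = b"video: my statement on the new policy"
signature = creator_key.sign(content)

# Anyone can verify the content against the creator's public key.
try:
    public_key.verify(signature, content)
    print("verified: unmodified and really from this creator")
except InvalidSignature:
    print("verification failed")

# An edited copy (or a deep fake) will not verify against the original signature.
try:
    public_key.verify(signature, content + b" (edited)")
except InvalidSignature:
    print("tampered content rejected")
```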
Disclosures: Kominers is a Research Partner at a16z crypto, which is an investor in various online platforms, including social media platforms (for general a16z disclosures, see https://www.a16z.com/disclosures/). Kominers also advises a number of companies on marketplace and incentive design, including koodos and Quora. Shapiro maintains disclosure information at https://scholar.harvard.edu/shapiro/files/coi.txt.