Solving the Internet's Misinformation Problem
To wrap up Viral Networks, we invited some experts to talk to us about interventions and solutions that can stop the spread of misinformation online. Those range from some user-level solutions from psychologist Gordon Pennycook, to community fact-checking in Africa with David Cheruiyot, to the demand for better platforms and data access from our colleague and digital public infrastructure scholar Ethan Zuckerman.
Guests:
Gordon Pennycook, David Cheruiyot, Ethan Zuckerman
Subscribe directly on Apple Podcasts or Spotify, or via our RSS feed. If you're enjoying the show, please be sure to leave the show a rating on your podcast provider of choice.
Transcript
Ethan Zuckerman
But what it's meant is that we don't trust the data coming out of these companies. And we see these companies actively fighting researchers when we're trying to figure out what the implications of these tools are and what they mean for individuals and for society. This space is too important, and it's got too many social implications, to leave it up to the platforms to do the research. It's time to have truly independent researchers who can work with this data.
Introduction
You’re listening to Viral Networks, a look at the current state of mis- and disinformation online, with the scholars studying it from the front lines. We’re your hosts, Emily Boardman Ndulue and Fernando Bermejo.
Fernando Bermejo
Welcome back everybody, and welcome to the final episode of Viral Networks. You just heard from Ethan Zuckerman of UMass Amherst.
Emily Boardman Ndulue
We’ve been talking a lot on this podcast about the people who make misinformation, how it spreads, how the platforms are used – but we’ve left one last little piece out of the puzzle. What do we do about it?
Fernando Bermejo
It’s true, we’ve basically spent five episodes outlining a problem – a very complex problem – but it’s not an impossible one to solve. Today, we’ll be talking about a few possible solutions. We’re joined by the psychologist Gordon Pennycook and journalism researcher David Cheruiyot to learn about how misinformation can be mitigated. And also by Ethan Zuckerman, to hear about how the Internet could be reshaped to take power away from the platforms where misinformation currently flourishes.
Emily Boardman Ndulue
Well, there’s actually another puzzle piece we’ve left out too, honestly: the user. At the end of the day, it’s users of these social media platforms, like you Fernando, and me, and all of you listeners, who encounter misinformation. Today’s episode is about solutions, but it’s also about the specific problems users have when encountering information on the Internet today. Gordon is going to kick off our episode today with a fascinating discussion on the psychology of Internet users, and why exactly they’re so vulnerable to misinformation.
Fernando Bermejo
We hope you enjoy the final episode of Viral Networks.
Gordon Pennycook
So I'm Gordon Pennycook. I'm an assistant professor of behavioral science at the Hill/Levene Schools of Business at the University of Regina here in Canada.
So I actually got my PhD in experimental, like cognitive psychology, even though I work at a business school, for some mostly weirdly historical reasons. So basically, I look at the problem of misinformation from a psychological standpoint, and mostly from a cognitive standpoint: what is it about our minds that leads people to fall for fake news? Why do they spread it? So it's mostly focused on the person instead of the systemic problems and all that.
Emily Boardman Ndulue
Is there anything that you can say about who falls for fake news or misinformation, either in terms of identity categories or ways of approaching problems? Is there anything that we know? Are we all equally susceptible to that?
Gordon Pennycook
People say that we believe fake news because we want to believe it, because it's consistent with our identity. We found that that's not really the case, actually.
Most of the time, the key thing that leads people to believe fake news is that they haven't thought that much about it. When you're on Facebook, you're just scrolling through; it's mostly pictures of dogs and babies and stuff. And then you see a news headline, and you're not really in the mode of thinking where you're ready to think about it in a critical sort of way. And yeah, it's more likely to be consistent with your ideology, because it's part of your feed; you've curated it.
Emily Boardman Ndulue
And what about the act of sharing, versus the act of reading and believing yourself? What do we know about why people share and why it spreads?
Gordon Pennycook
In some studies we find that twice as many people, in one condition, share a false headline that's consistent with their ideology as, in a different condition, believe that headline to be true. So in some cases twice as many people were sharing headlines as believed them. Half the time people were sharing things that, if you asked them, they would realize they didn't actually believe. And so there's this big disconnect, and that's partly an attentional problem. People can only focus on so many different things when they're making choices. And the thing that is incentivized on social media is not accuracy. You don't get extra points for sharing something that's true, unless you happen to have a very studious group of friends who correct you every single time you share something false, which most of us don't. What you get reinforcement for on social media is likes and comments and stuff like that. And unfortunately accuracy is not very related to those things.
Fernando Bermejo
Gordon also explained to us what psychologists call the “illusory truth effect”: the more we hear a claim repeated, the more likely we are to feel that it is true. This, he told us, is a heuristic that even the most intelligent among us rely on. Repetition influences us.
Gordon Pennycook
And the only way to truly have confidence, ironically, is to actually stop and reflect about these things, right? And most of our confidence comes from the intuitive part. So this is the kind of battle we have between these things: we gain confidence from feeling intuitive, from having things repeated, from things that seem true to us. And then you have to have that constant, "But is that really true?" And something has to trigger that. And it's not easy to train that sort of thing, unfortunately.
Emily Boardman Ndulue
Fortunately, while misinformation might be hard to recognize at first glance, Gordon offered a few simple techniques for thinking critically about the information we encounter online.
Fernando Bermejo
One approach is checking the source of that information.
Gordon Pennycook
The first experiment I ever ran on fake news and misinformation was actually varying whether people saw the source, because my assumption, like many people's, was that what people are probably doing is just seeing whether this is from the New York Times, or from nowwaitnews.com or some other random source. And that's a really good heuristic, actually.
Like, the New York Times, whatever your opinion of the New York Times, they don't make shit up. They would lose their jobs if they made something up. But the fake news people, that's all they do. And so knowing the source is a really good heuristic. You don't even have to know what the claim is. You can basically say, if that's from a dubious source, I should probably just not pay attention to it.
Fernando Bermejo
And another would be asking users if they believe what they’re seeing is accurate.
Gordon Pennycook
The other element of that is that tech companies, or even just groups with a vested interest, can do things to help people do that themselves.
Since it's an attentional issue, what we found is that simply prompting people to think about accuracy helps: just giving them reminders or asking them a question about whether something is true, without telling them what to do. We're not telling them to be accurate or to think about it. We're just asking a question that is about accuracy. What you see is that afterwards, people are more discerning in what they share on social media.
So, I mean, we don't really know what people are thinking. We know what influences their choices, and what factors related to the content and the choices are influential. So for example, in one experiment we have both true and false news headlines that we show people. In the control condition, we just give them a bunch of these headlines and ask them what they would consider sharing online. Okay, so it's not actually on Facebook, it's a lab study. In a different condition, what we do is ask everybody, before they share the headline, "Do you think this is accurate?" Okay, so now we know that they're thinking about accuracy, because we're asking them about it. What you find is that they share half as much false content if you ask about accuracy first. If you force them to think about accuracy, they share half as many false headlines.
Emily Boardman Ndulue
Gordon also flagged that while social media companies may be drawn to attach warnings to certain articles shared on their platforms, those warnings could backfire. Users may start to believe that anything without a warning attached is valid, which would only bolster a piece of misinformation that slips through the cracks.
Fernando Bermejo
It’s why, at the end of the day, one major solution to mis- and disinformation is educating users about some common red flags, which will hopefully empower them to make good judgements about what they’re seeing online.
Gordon Pennycook
We still have to educate people and help them figure out more accurately what is in fact true or false. You give people the tools to do proper fact-checking themselves.
And we also need good debunks. We need resources for people to find the truth. And so all these things are things that kind of have to work together to make things better.
Emily Boardman Ndulue
There was a running theme in our interview with Gordon: if people want to be more skeptical, they need some help. And fact checking can actually be hugely helpful. Not just because it can flag bad information, but because it can actually happen on the user level.
Fernando Bermejo
That’s right: you might normally think of fact checking as a job that happens at a newspaper, but increasingly groups and organizations of fact checkers are cropping up to monitor social media and flag misinformation.
Emily Boardman Ndulue
Our next guest, David Cheruiyot, told us all about the rise of independent fact checking organizations doing crucial work at the grassroots level.
David Cheruiyot
My name is David Cheruiyot, and I am an assistant professor at the University of Groningen in the Netherlands.
I'm a journalism researcher, and I have a special interest in non-journalistic actors: individuals or organizations that are involved in one way or another with the production or criticism of the media, or with engaging audiences, but operate outside the mainstream news organizations as we know them.
I remember the early days of COVID-19, when there were these COVID-19 denialists. Then all these debates, political debates, not just in the US, by the way, but around the world, about masks, about vaccines, and the spread of misinformation and disinformation. Remember also that at some point in the early months of COVID-19, the WHO announced that we were not just experiencing a pandemic, but an infodemic.
We could say it has increased interest within institutions, and even research organizations or research institutions, to find ways to combat fake news. Fact checking organizations have become important, maybe more than ever, in this pandemic period.
Emily Boardman Ndulue
I'd like to talk a bit about fact checking, and have you define it for our listeners. If you could just give us the basics: what is it, who's doing it, and where is it situated in the news production cycle and process?
David Cheruiyot
I think, to define fact checking in a very general sense, it's just one way of verifying information that is received, and it's part of the traditional production process of the news. Fact checking is not really something new, because it has been embedded in journalistic practice for a long time.
But I think the interest has shifted towards independent fact checking, and here, independent fact checking means that, or implies that, actors other than the mainstream news organizations or journalists, traditional journalists, as we know them, are involved in the verification of information in one way or another.
Fact checkers, I think, have been able to establish strategies to combat fake news mostly at the dissemination or distribution stage: to be able to, for example, debunk viral information or news about COVID vaccines or any conspiracy theories, and to provide accurate and correct information about that particular text.
Emily Boardman Ndulue
Let's talk a bit about these organizations. How do they define their goals? How do they actually do their work? Help us understand that.
David Cheruiyot
Let me speak about Africa Check, which is an organization, an independent fact checking organization in Africa, and it was created in 2012. It's actually the first independent fact checking organization in Africa. What it basically does is to monitor the news media and to also monitor the digital space, the common platforms, Twitter and Facebook, to look out there for news that is inaccurate, to look out for memes and conspiracy theories or rumors, and then debunk them.
Then what it also does is to provide correct information, because that is important eventually for audiences in different parts of Africa to finally get the accurate information. We were interested in Africa Check because, as I said, my interest has been in non-journalistic actors, and we looked at it as one of the organizations that we refer to as civic tech organizations: organizations that, in one way or another, involve citizens in order to function and to reach their goal, which is to debunk these inaccuracies or fake news.
One way in which Africa Check does this is to ask the public, through digital platforms like Twitter and Facebook or WhatsApp, to volunteer any kind of report that is inaccurate, or memes or conspiracy theories. It also partners with media organizations, in some cases, to check any inaccuracies or any memes that have been established for quite a long time and continuously appear in the content of news organizations.
Emily Boardman Ndulue
I imagine that the universe of things to check is far vaster than the capacity of the organization. How do priorities get set?
David Cheruiyot
It differs from one organization to another. From what I've seen, and from the interviews we did, what was followed was basically what was being shared a lot in digital spaces.
It could be a report that had gone viral, a news story that appears, or what is shared a lot on WhatsApp. Like I mentioned, there's the citizen element, the element in which citizens are involved in providing information to these independent fact checkers. That helps them make the decision about whether something is spreading.
Emily Boardman Ndulue
David mentioned that these fact checking organizations are a powerful grassroots effort to ensure that regular people aren’t getting misleading information from powerful people.
David Cheruiyot
We speak a lot about what has gone viral and a lot of people speak about that, and it ends up being fact checked. But what has also been done by Africa Check is to follow newsmakers, and newsmakers here in this case could be professionals or politicians. Africa Check, for example, has done this, I think. It has become a practice over the past few years.
There's this state of the nation address that is delivered by the presidents of Kenya and South Africa. They dedicate the whole period of the speech to fact checking: just finding out about the facts mentioned, for example, about the growth of the economy or about the infrastructure that has been developed, because governments sometimes tend to exaggerate those things.
What Africa Check has done is to just find out some of those claims that are made, and then counter check them with a variety of sources within the government, or within civil society. That has become very common, and I think also during the COVID period, some of the facts that were mentioned by the health ministers in some African countries were fact checked by Africa Check.
Emily Boardman Ndulue
But David made sure to stress during our interview that the burden for ensuring there’s good information online can’t totally rest on groups of fact checkers. Tech companies have a lot of responsibility too.
David Cheruiyot
What we know now is that this is an area that needs to be explored further: questions about fact checking, such as where this information is obtained, what kind of data is stored, what kind of data is disseminated, and the collaborations and partnerships between big tech organizations and these independent fact checking organizations, because this has big implications for transparency, for accountability, and, in a very broad sense, for democracy.
Fernando Bermejo
Yes, tech companies should absolutely be responsible for what appears on their platforms. So far on today’s episode, we’ve talked about what individual users can do to stay vigilant about encountering misinformation, and how important collective efforts are to combat its spread. But thus far we’ve left out perhaps the most influential entities of all: the companies who run the social media platforms where misinformation is spreading.
Emily Boardman Ndulue
To wrap up today’s episode, and this series, we invited an old colleague of ours, Ethan Zuckerman, to tell us about how interventions at the platform level are a key part of the solution to the misinformation problem. Yet, as he’ll explain, a major obstacle stands in the way of pushing these platforms to address it: researchers often have very little insight into exactly what is even being shared and discussed on these platforms.
Fernando Bermejo
As Ethan explains, many researchers, including several who we have interviewed on this show, have to develop novel methods for studying misinformation on social media precisely because the tech companies themselves — especially Meta and Google — often deny researchers direct access to data. And it also means they have to do their detective work with precious little evidence.
Ethan Zuckerman
Hi, my name is Ethan Zuckerman. I'm an associate professor at the University of Massachusetts Amherst. I teach communication, information, and public policy.
So when we talk about holes in the data that we need to research these issues, "holes" is sort of the wrong word. It implies that there's a tapestry, and that there's a hole in it. Mostly what we have are holes, with a few very small pieces of tapestry.
Here's the way that I've come to understand this. There are social scientists all over the world who want to help Facebook and YouTube and Twitter, and everybody else, understand what the individual and community effects of social media are, right? There's all sorts of hypotheses out there that social media may be affecting individuals – making you more depressed, making you more vulnerable – that social media may be affecting societies, allowing the spread of mis- and disinformation, making them more polarized, and so on and so forth. Many of us started by trying to work with the platforms.
We participated in things like Social Science One, where Facebook and the Social Science Research Council worked together to get people access to data within Facebook. It didn't work. After more than 18 months of waiting, Facebook released less data than they had promised, to a very limited number of people. And that data proved to be tremendously faulty. In fact, the only way we found out the data was faulty was that one of the researchers figured it out and pointed it out. And then Facebook somewhat sheepishly admitted that the data was broken.
Researchers then tried to study the platforms by asking users to donate their data. These are projects like the NYU Ad Observatory, where more than a thousand users said, “Yes, you can look over my shoulder while I look on Facebook.” The project then collected those ads to study how Facebook does ad targeting for users. Facebook then hit NYU with a cease and desist letter. So now, as people are trying to study the platforms on their own, the platforms are trying to take legal action against researchers.
The platforms could have found ways of responsibly sharing this data, but at the end of the day, the platforms are not as interested in understanding the social impacts of their tools as they are in making money off of them. This was what was interesting about the Facebook Files. Facebook has a brilliant team of social scientists in-house. They have one of the very best labs in the world doing computational social science. And that lab has discovered fascinating and important things. It is the company that has then responded and said, it would really hurt us commercially if we followed up on some of these things that we've learned. So it's not the researchers' fault; it's really the lawyers and the profit-making parts of the company. But what it's meant is that we don't trust the data coming out of these companies. And we see these companies actively fighting researchers when we're trying to figure out what the implications of these tools are and what they mean for individuals and for society.
This space is too important, and it's got too many social implications, to leave it up to the platforms to do the research. It's time to have truly independent researchers who can work with this data.
Fernando Bermejo
And when Ethan says that these companies are actively fighting research, he does not just mean they fight it through legal action. They also fight it by giving researchers the tiniest slice of pie they possibly can.
Ethan Zuckerman
YouTube's formal research API allows me to run 100 searches a day. That's how many I'm allocated as a professional researcher. That's crazy. You might run a hundred searches on YouTube trying to figure out how to fix your kitchen faucet. But we need to find a way to either mandate limits that allow real research to be done, or we need to allow researchers to team together and share access to these things. We need real research access, and we need to protect other ways that people access these sites: by scraping them, by simply retrieving public webpages and capturing that information.
Emily Boardman Ndulue
Ethan stresses that independent research would not just help us better understand how, for example, YouTube’s algorithm determines what content gets boosted, but it could actually help elucidate the relationship between extreme content and offline violence.
Ethan Zuckerman
So one of the big hypotheses that comes forward about YouTube is that YouTube pushes people towards extremism. This is called the Rabbit Hole hypothesis, and it's named after a theory put forward by the tech columnist Kevin Roose, who believes that people who start out looking for relatively innocuous content find themselves looking for increasingly extreme content. Researchers who have tried to study this have actually found a lot of problems reproducing the Rabbit Hole effect. Actually, what seems to be coming out of the research is some evidence that people who have strong racist tendencies go to YouTube and find racist content. But that's very different than YouTube converting people towards extremism. YouTube has the data that might help us answer this, right? We might be able to watch a set of users over a period of time and see if they move towards extremes.
Fernando Bermejo
So what would this independent research look like? Ethan suggests building systems that let researchers or even auditors see social media through users’ eyes.
Ethan Zuckerman
Data donation and panel methods might be able to do this if we could follow a hundred thousand YouTube users over the course of a year and see whether they become more extreme. But the other way to do this is to audit the algorithms. You could have a group of algorithmic auditors who come in with certain tests to try to figure out: Is your algorithm pushing people towards the extremes?
Is it favoring or disfavoring content from certain racial or ethnic groups? Is your advertising unfairly targeting or excluding certain groups? And you could do this with a body that had a fiduciary duty. Accountants have certain duties to the companies that they audit; they're not allowed to just grab that data and go out and sell it. And you could do something similar in this space. You could essentially say, you're going to get to see some of the inside secrets of YouTube, but you're also going to be sworn not to harm the company in revealing those things.
Emily Boardman Ndulue
Of course, these are all solutions for researching social media platforms as they are now. But Ethan imagines that there could actually be a future version of the Internet with a more robust ecosystem of smaller social media platforms that would actually create competition for the most powerful social media platforms.
Fernando Bermejo
In Ethan’s view, one major benefit of creating a competitive ecosystem of smaller social media sites is that users could actually have more of a say in how those platforms are run and, in turn, how content moderation is handled.
Ethan Zuckerman
One of the huge problems with Facebook is that the people who use Facebook have no responsibility for the speech on Facebook. You go on Facebook, and if you say something that is offensive and out of line, so on and so forth, it's left up to a set of AI algorithms and some poorly paid people in the Philippines to try to figure out whether you get to continue speaking or not. That system doesn't work well. We know in particular it works very, very badly around issues like hate speech, but even beyond that, it's pretty disempowering for those of us involved.
We don't have any responsibility for the speech environment that we end up in. So it'd be really interesting if we could complement, and maybe over the long term replace, some of these social networks with social networks that worked entirely differently. This is a project that I'm calling "digital public infrastructure." The idea behind it is that the infrastructures that support our public sphere are frankly way too important to be left purely to profit-making companies. Simply deciding that, you know, Facebook controls some big chunk of our public sphere is not, to me, an acceptable answer to the question. We need to have some way of saying these spaces should be designed for healthy conversations, and they should be governed by the people who live within them.
Fernando Bermejo
It’s hard to imagine exactly what an Internet after Facebook would look like, and it is impossible to tell what the landscape for misinformation would be on a more decentralized Internet, but Ethan is pointing out one thing we know for certain: large tech companies like Meta intentionally cultivate a giant user base, and problems like viral misinformation seem to follow pretty naturally.
Emily Boardman Ndulue
But it’s true that the design and governance of these spaces online matters a lot. That’s what Joan Donovan told us all the way back in episode one. And as Ethan points out, accountability likely can’t come until both regulators and users have a say in how these huge platforms operate.
It was a true pleasure chatting with all of the gracious guests who joined us, and a thrill finding a way to present this vast scope of information to you, the listeners. Thank you so much for joining us. Signing off, we’re your hosts, Emily Boardman Ndulue.
Fernando Bermejo
And Fernando Bermejo. Thank you.
Credits
Viral Networks is a production of Media Ecosystems Analysis Group. We’re your hosts, Emily Boardman Ndulue and Fernando Bermejo. All episodes are produced and edited by Mike Sugarman. Julia Hong joined us as a script writer and provided additional research. Music on this show was composed by Nil and our producer Mike. Funding to produce this series was provided by the Bill and Melinda Gates Foundation. And last but certainly not least, we want to give a big thank you to all of the experts who joined us for interviews on this show.