Does It Matter If People Don’t Trust Fact Checkers?

Misinformation spreads quickly on social media, and fact-checkers have become a key line of defense. But what happens when people don't trust the fact-checkers themselves? A new study from MIT's Sloan School of Management reveals something surprising: even when users are skeptical of fact-checkers, their warning labels still help slow the spread of false information.

Most social media platforms work with third-party fact-checkers to flag content that’s misleading or false. Previous research suggests these warnings are generally effective, but there’s a catch: in a politically divided country like the United States, trust in fact-checkers isn’t universal. Conservatives, who are more likely to encounter and share misinformation, also tend to distrust these warnings, raising concerns that the labels might backfire.

Effective strategy

To find out whether the labels still work among people who doubt them, the MIT researchers first measured trust in fact-checkers across different groups. As expected, Republicans were less trusting than Democrats, regardless of the perceived bias of the fact-checkers. This distrust was even stronger among Republicans who were more knowledgeable about news production, scored higher on cognitive tests, or had better web skills. Digital media literacy, by contrast, seemed to boost trust across the board, regardless of political leaning.

In the second part of the study, the researchers conducted experiments with over 14,000 participants to see how warning labels affected their perceptions of false headlines. Participants were shown a mix of true and false headlines, with some receiving warnings on the misleading ones. The results were clear: while trust in fact-checkers did make the labels more effective, the warnings still reduced belief in and sharing of false content among those who distrusted the fact-checkers.

Even the most skeptical participants, particularly conservatives, were less likely to spread misinformation after seeing a warning label. The researchers found no evidence that these labels backfire. Instead, the labels seemed to encourage caution, perhaps because users wanted to avoid sharing something that might hurt their credibility.

Still paying attention

The gap between distrust in fact-checkers and the impact of their warnings suggests that while people might say they don’t trust the labels, they still pay attention to them. The warnings may prompt a second look at the content, or users might hesitate to share flagged posts to protect their reputation.

For those fighting misinformation, this is good news. While not perfect, warning labels are a valuable tool that works even for people who claim not to trust them.

The researchers recommend that these labels be used alongside other methods, like reducing the visibility of harmful content or removing it altogether. In a world where false information spreads easily, the effectiveness of these labels, even among the skeptical, offers a promising way to slow it down.
