In the battle against fake news, various social networks have taken to placing warning tags on content that fact-checking services have judged to be false. The obvious expectation is that this will encourage people to avoid the content, but new research from MIT shows that the opposite may occur: readers become more likely to share other content with their friends, even when it hasn't been verified.
“Putting a warning on some content is going to make you think, to some extent, that all of the other content without the warning might have been checked and verified,” the researchers say. “There’s no way the fact-checkers can keep up with the stream of misinformation, so even if the warnings do really reduce belief in the tagged stories, you still have a problem, because of the implied truth effect.”
Labeling fake news
The researchers recruited several thousand volunteers and showed them a variety of true and false news headlines via a Facebook-style interface. The false stories were drawn from the fact-checking website Snopes, and volunteers saw an equal mix of true and false stories.
The participants were asked to indicate whether they would consider sharing each of the stories they encountered. The volunteers were divided into groups: one group saw some stories clearly labeled as false; a second saw some stories labeled as false and others as true; and a control group saw no labels at all.
The results show that tagging content as fake does indeed make people less inclined to share it, with the share rate falling from 29.8% to 16.1% after content was labeled. This had the side effect of implying that all unlabeled content was true, however, making participants more likely to share the other stories.
“We robustly observe this implied-truth effect, where if false content doesn’t have a warning, people believe it more and say they would be more likely to share it,” the researchers explain.
The key seemed to be accompanying the warnings on false stories with verification labels on true stories, as this reduced the sharing of false stories regardless of whether they were labeled, although it must be said that over 25% of unlabeled false stories were still shared.
“If, in addition to putting warnings on things fact-checkers find to be false, you also put verification panels on things fact-checkers find to be true, then that solves the problem, because there’s no longer any ambiguity,” the researchers say. “If you see a story without a label, you know it simply hasn’t been checked.”
Interestingly, participants didn't seem to reject warnings even when the story supported their political ideology; the labels still made an impact on their thinking regardless. That's a promising sign, and it suggests that our reasoning powers aren't always overcome by partisanship.
The researchers plan to explore the topic further, but hope that their initial findings can already give social networks a degree of direction in tackling the plague of fake news.
“I think this has clear policy implications when platforms are thinking about attaching warnings,” they conclude. “They should be very careful to check not just the effect of the warnings on the content with the tag, but also check the effects on all the other content.”