Ordinarily, when we think of online misinformation, we picture Russian agencies deliberately spreading falsehoods to mislead people. Misinformation doesn’t always have such malicious origins, however, and is often produced and shared by ordinary people.
New research from MIT explores the best way for people and organizations to respond when they encounter misinformation online. The authors suggest that polite attempts to correct misinformation on Twitter can have the adverse effect of encouraging more toxicity from the individuals concerned, while further entrenching their false views into the bargain.
The researchers conducted a number of experiments on Twitter whereby they provided various polite corrections along with reliable links to solid evidence whenever they found tweets about politics that were flagrantly false.
“What we found was not encouraging,” the researchers say. “After a user was corrected … they retweeted news that was significantly lower in quality and higher in partisan slant, and their retweets contained more toxic language.”
Making corrections
The researchers targeted 2,000 Twitter accounts spanning a mix of political persuasions, each of which had tweeted one of 11 different false news articles that had received widespread attention. Each of the articles had been debunked by Snopes.
The researchers then created a number of bot accounts that were allowed to go about their business for a few months, gaining around 1,000 followers in the process in order to appear as legitimate as possible.
The accounts then set about correcting the false claims, replying that they weren’t sure the story was accurate and linking to the Snopes article debunking it.
Changing course
Did the corrections work? Not really. The experiment revealed that the accuracy of news sources tweeted by those users fell by around 1% in the 24 hours after they’d been corrected. What’s more, the partisanship of their tweets increased by a similar percentage, with the toxicity of their tweets rising even more.
In other words, far from encouraging people to pursue a more accurate path, the corrections seemed to encourage people to double down on the path they had been following before. Interestingly, however, this effect applied only to retweets, not to original tweets. The researchers believe this is because we spend less time considering content we retweet (which suggests we seldom read what we share) than we do when crafting original messages.
“We might have expected that being corrected would shift one’s attention to accuracy,” they explain. “But instead, it seems that getting publicly corrected by another user shifted people’s attention away from accuracy — perhaps to other social factors such as embarrassment.”
Public shaming
Interestingly, these effects appeared to be even larger when people were corrected by accounts that shared their political alignment, which the researchers believe suggests that the negative response is not driven by partisan animosity.
This finding casts doubt on previous assertions that a neutral, nonconfrontational reminder about the reliability of news sources can help to increase the accuracy and reliability of content shared on social media.
“The difference between these results and our prior work on subtle accuracy nudges highlights how complicated the relevant psychology is,” the researchers explain.
The answer might be to deliver corrections in private rather than as a public shaming over the accuracy of one’s content, which seems more likely to encourage people to double down in order to save face.
As we strive to ensure that social media is a platform for sharing reliable and accurate information, this topic is likely to attract considerable further attention, which will hopefully ensure that attempts to make things better don’t actually make things worse.
“Future work should explore how to word corrections in order to maximize their impact, and how the source of the correction affects its impact,” the researchers conclude.