Conspiracy Theories Spread Due To Negativity Bias

Social media has unfortunately proven a fertile breeding ground for conspiracy theories. Research from Stony Brook University suggests that negativity bias helps such nonsense spread on platforms like Twitter.

The researchers analyzed around 4 million tweets, from 350,000 users, that suggested the 2020 presidential election was beset by voter fraud. They found that tweets were far more likely to be shared when the message contained strong negative emotions.

Voter fraud

The researchers utilized data from the VoterFraud2020 database, which contains tweets from October 23rd to December 16th, 2020. In total, nearly 8 million tweets and over 25 million retweets were collected.

“Conspiracy theories about large-scale voter fraud spread widely and rapidly on Twitter during the 2020 U.S. presidential election, but it is unclear what processes are responsible for their amplification,” the researchers say.

The researchers ran a series of simulations in which individual users tweeted and retweeted one another under varying strengths of cognitive bias. The simulated sharing patterns were then compared with real-world retweet behavior among people who shared conspiracy theories about voter fraud.
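The paper describes this only at a high level, but the core mechanism can be illustrated with a toy simulation. The Python sketch below assumes a simple branching retweet cascade and a logistic resharing rule whose strength is set by a bias parameter; the parameter names, numbers, and the specific rule are illustrative assumptions, not the researchers' actual model.

```python
import math
import random

random.seed(0)

NUM_USERS = 2000   # rough cap on how many people a cascade can reach
FOLLOWERS = 5      # followers exposed by each (re)tweet
BIAS = 3.0         # strength of the assumed negativity bias
ROUNDS = 5         # how many hops a tweet can travel

def reshare_prob(negativity, bias=BIAS):
    """Logistic rule: the more negative a tweet, the likelier a reshare."""
    return 1.0 / (1.0 + math.exp(-bias * (negativity - 0.5)))

def simulate_cascade(negativity):
    """Return the total number of reshares for a tweet of given negativity."""
    exposed = FOLLOWERS          # followers of the original poster
    total_reshares = 0
    for _ in range(ROUNDS):
        reshares = sum(
            1 for _ in range(exposed) if random.random() < reshare_prob(negativity)
        )
        total_reshares += reshares
        # Each resharer exposes their own followers, up to the network size.
        exposed = min(reshares * FOLLOWERS, NUM_USERS)
        if exposed == 0:
            break
    return total_reshares

# Compare average cascade sizes for mildly vs. strongly negative tweets.
mild = [simulate_cascade(0.3) for _ in range(200)]
strong = [simulate_cascade(0.8) for _ in range(200)]
print("avg reshares, mildly negative:  ", sum(mild) / len(mild))
print("avg reshares, strongly negative:", sum(strong) / len(strong))
```

In spirit, fitting such a model to real data means sweeping the bias parameter and comparing the simulated cascade-size distributions against the retweet cascades observed on Twitter, which is roughly what the comparison described above involves.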

“Our results suggest that the spread of voter fraud messages on Twitter was driven by a bias for tweets with more negative emotion, and this has important implications for current debates on how to counter the spread of conspiracy theories and misinformation on social media,” the researchers explain.

Negative content

The research confirmed results from past studies showing that negative content tends to be shared more frequently than positive content. Interestingly, however, quote tweets, in which users add their own commentary, were often more moderate in tone than the tweets they quoted. The researchers believe that people are less likely to amplify negativity when they are adding their own comments.

Because the model replicated real-world sharing patterns so closely, the researchers suggest it could also be used to simulate the spread of other forms of misinformation and to test possible interventions, such as slowing the rate at which tweets appear in people's timelines.
