The Role Twitter Bots Play In Spreading Misinformation

The emergence of artificial Twitter accounts that seek to manipulate public discourse has been one of the more fascinating stories of the past year. As the presence of these fake accounts has become more widely known, attention has turned to understanding their impact. The latest effort in this regard comes via a study from Indiana University, which explored the role played by bot accounts during the 2016 U.S. presidential election.

The researchers analyzed around 14 million messages containing links to 400,000 articles shared on Twitter between May 2016 and March 2017. The analysis revealed that it took just 6% of Twitter accounts being bots to spread 31% of the information rated low in credibility across the network. What's more, the bots' role grows as content approaches the 'tipping point' at which it goes viral.

Much of this amplification happens within a very small window of time, typically measured in a handful of seconds, which highlights the challenges involved in tackling the spread of misinformation. The authors believe the phenomenon mirrors domains such as the stock market, where high-frequency trading can rapidly change the dynamics of the market.

“This study finds that bots significantly contribute to the spread of misinformation online — as well as shows how quickly these messages can spread,” they explain.

An outsized impact

What's more, the bots were found to amplify a message's volume and visibility until it was likely to go viral and be shared widely, and they managed this despite representing a relatively small share of the overall accounts.

“People tend to put greater trust in messages that appear to originate from many people,” the authors explain. “Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them.”

The data also uncovered some interesting tactics deployed by bot accounts, including amplifying a single tweet, usually posted by a human operative, with hundreds of automated retweets. For instance, the team found a single account that mentioned Donald Trump's Twitter account in 19 separate messages, all about huge numbers of illegal immigrants casting votes in the election (which was, of course, a lie).
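To make that first tactic concrete, here is a minimal sketch of how one might flag a single tweet receiving a suspicious burst of retweets within a short window. This is not the researchers' actual method; the data is synthetic, and the window size and threshold are illustrative assumptions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Synthetic records: (original_tweet_id, retweeting_account, timestamp).
retweets = [
    ("tweet_1", f"bot_{i}", datetime(2016, 11, 1, 12, 0) + timedelta(seconds=2 * i))
    for i in range(300)  # 300 retweets of one tweet inside ten minutes
] + [
    ("tweet_2", "human_a", datetime(2016, 11, 1, 9, 0)),
    ("tweet_2", "human_b", datetime(2016, 11, 1, 15, 30)),
]

WINDOW = timedelta(minutes=10)  # hypothetical burst window
THRESHOLD = 100                 # hypothetical retweet-count threshold

by_tweet = defaultdict(list)
for tweet_id, account, ts in retweets:
    by_tweet[tweet_id].append(ts)

for tweet_id, stamps in by_tweet.items():
    stamps.sort()
    start = 0
    # Slide a window over the sorted timestamps and look for dense bursts.
    for end in range(len(stamps)):
        while stamps[end] - stamps[start] > WINDOW:
            start += 1
        if end - start + 1 >= THRESHOLD:
            print(f"{tweet_id}: burst of {end - start + 1}+ retweets within {WINDOW}")
            break
```

Run as-is, this flags only "tweet_1", the artificially amplified tweet, while the organically shared "tweet_2" passes unremarked.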

This targeting of influential accounts was also identified in a second study, from the University of Southern California. When analyzing elections in Spain, the researchers found that bot accounts weren't acting randomly, but were targeting specific human influencers to polarize the debate and exacerbate social conflict. What's more, the humans largely had no idea that they were being targeted.
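This finding suggests a simple diagnostic, assuming you already have mention data and a list of suspected bots: measure what share of the mentions each account receives comes from those bots. A highly mentioned account with a high bot share would be a candidate for the kind of targeting described. The data, labels, and account names below are synthetic.

```python
from collections import Counter

suspected_bots = {"bot_1", "bot_2", "bot_3"}  # assumed labels, however obtained

# Synthetic (mentioning_account, mentioned_account) pairs extracted from tweets.
mentions = [
    ("bot_1", "influencer_x"), ("bot_2", "influencer_x"),
    ("bot_3", "influencer_x"), ("human_a", "influencer_x"),
    ("human_a", "human_b"), ("human_c", "human_b"),
]

total = Counter(target for _, target in mentions)
from_bots = Counter(target for source, target in mentions if source in suspected_bots)

# A high bot share on a highly mentioned account hints at deliberate targeting.
for target, count in total.most_common():
    share = from_bots[target] / count
    print(f"{target}: {count} mentions, {share:.0%} from suspected bots")
```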

Fighting the bots

A ray of light emerged from the Indiana study, however: removing around 10% of bot accounts is often enough to significantly reduce the number of low-credibility stories circulating on the network.

“This experiment suggests that the elimination of bots from social networks would significantly reduce the amount of misinformation on these networks,” the team explain.
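To see why that might be, consider a toy cascade simulation in the spirit of the experiment described. Every parameter here is an illustrative assumption, not a figure from the study: a small fraction of bots shares a low-credibility story almost unconditionally, humans mostly share it once it already looks popular (the tipping-point effect noted earlier), and we compare the story's average reach before and after deleting a tenth of the bot accounts.

```python
import random

random.seed(1)

N = 10_000
BOT_FRACTION = 0.06         # echoes the ~6% figure from the Indiana study
FOLLOWERS_PER_ACCOUNT = 20  # assumption: everyone reaches 20 random accounts

bots = set(random.sample(range(N), int(N * BOT_FRACTION)))
followers = {u: random.sample(range(N), FOLLOWERS_PER_ACCOUNT) for u in range(N)}

def run_cascade(removed=frozenset()):
    """One low-credibility story spreading outward from a random seed account."""
    seed = random.randrange(N)
    exposed, shared, frontier = {seed}, set(), [seed]
    while frontier:
        nxt = []
        for u in frontier:
            if u in removed:
                continue
            # Bots share almost unconditionally; humans mostly share once the
            # story already looks popular.
            p = 0.9 if u in bots else (0.10 if len(shared) > 50 else 0.02)
            if random.random() < p:
                shared.add(u)
                for v in followers[u]:
                    if v not in exposed:
                        exposed.add(v)
                        nxt.append(v)
        frontier = nxt
    return len(shared)

def average_reach(removed=frozenset(), trials=200):
    return sum(run_cascade(removed) for _ in range(trials)) / trials

# 'Remove' a tenth of the bot accounts and compare the story's average reach.
removed = set(random.sample(sorted(bots), int(0.1 * len(bots))))
print(f"mean shares, all bots active:  {average_reach():.1f}")
print(f"mean shares, 10% bots removed: {average_reach(removed):.1f}")
```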

The research also highlights some steps social network platforms can take to slow the spread of fake news, including better bot detection algorithms and requiring a 'human in the loop' to limit automated messaging on the system.
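As an illustration of how those two ideas might fit together, here is a minimal sketch of a crude feature-based bot score feeding a human review queue. The features, weights, and thresholds are assumptions loosely based on commonly cited bot signals, not any platform's actual detection logic.

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float
    followers: int
    following: int
    default_profile: bool

def bot_score(a: Account) -> float:
    """Heuristic score in [0, 1]; higher means more bot-like. Weights are made up."""
    score = 0.0
    if a.tweets_per_day > 100:  # an inhuman posting rate
        score += 0.4
    if a.following > 0 and a.followers / a.following < 0.1:
        score += 0.3            # follows far more accounts than follow it back
    if a.default_profile:
        score += 0.3            # never customized the account
    return min(score, 1.0)

AUTO_RESTRICT, HUMAN_REVIEW = 0.9, 0.5  # hypothetical thresholds

accounts = [
    Account("mass_bot", tweets_per_day=500, followers=2, following=800,
            default_profile=True),
    Account("suspect_1", tweets_per_day=250, followers=400, following=500,
            default_profile=True),
    Account("regular_user", tweets_per_day=4, followers=250, following=300,
            default_profile=False),
]

for acct in accounts:
    s = bot_score(acct)
    if s >= AUTO_RESTRICT:
        print(f"{acct.handle}: automatically restricted (score {s:.1f})")
    elif s >= HUMAN_REVIEW:
        # The 'human in the loop': ambiguous cases go to a person, not a script.
        print(f"{acct.handle}: queued for human review (score {s:.1f})")
    else:
        print(f"{acct.handle}: no action (score {s:.1f})")
```

The point of the middle branch is that only the clearest cases are handled automatically; borderline accounts get routed to a human rather than silenced by a script.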

As part of their work, the team have produced a new tool designed to measure the volume of bot activity in any election campaign, as well as to identify some of the accounts engaging in this activity. It's worth checking out if this is a topic that interests you.
