The last few years have seen intense scrutiny of social media platforms such as Facebook and Twitter, both of which have been accused of facilitating the manipulation of voters during elections by nefarious actors using human and automated networks to create and spread misinformation.
A new study from the University of Manchester explores just how powerful these networks have become. The research focused on data from a court case in South Korea, in which the National Intelligence Service (NIS) was accused of controlling over 1,000 Twitter accounts in order to manipulate the 2012 presidential election.
When the interactions between the accounts were observed, there was clear evidence of coordination between them. This coordination reflected a classic 'principal-agent problem'.
“The campaign organizer (or “principal”) wants things done a certain way. For astroturfing campaigns, this means that the agents should try to appear as if they are part of a legitimate grassroots campaign. The “agents,” however, may lack the motivation to do so and try to cut corners to please the organizers,” the researchers explain.
Taking shortcuts
This inherent laziness can often prompt agents to simply copy and paste the same message across multiple accounts, which makes the campaign much easier to detect, not least by the researchers in this study.
“In summary,” they explain, “the coordination patterns we looked for are two accounts posting the same tweet within a short time window, and two accounts retweeting the same tweet within a short time window.”
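The coordination patterns the researchers describe can be sketched in code. The function below is a minimal illustration, not the study's actual method: it flags pairs of accounts that post identical text within a short time window. The account names, field layout, and 60-second window are all assumptions made for the example.

```python
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(posts, window=60):
    """Find pairs of accounts that posted identical text within `window` seconds.

    posts: list of (account, text, timestamp_in_seconds) tuples.
    Returns a set of (account_a, account_b) pairs, sorted within each pair.
    """
    # Group posts by their exact text, since copy-pasted messages are identical.
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))

    pairs = set()
    for entries in by_text.values():
        entries.sort(key=lambda e: e[1])  # sort by timestamp
        # Compare every pair of posts sharing the same text.
        for (a, ta), (b, tb) in combinations(entries, 2):
            if a != b and abs(tb - ta) <= window:
                pairs.add(tuple(sorted((a, b))))
    return pairs

# Hypothetical data: two accounts post the same message 30 seconds apart.
posts = [
    ("acct1", "Vote for X!", 100),
    ("acct2", "Vote for X!", 130),   # same text, 30s later -> coordinated
    ("acct3", "Vote for X!", 5000),  # same text, but outside the window
    ("acct4", "Unrelated tweet", 110),
]
print(coordinated_pairs(posts))  # {('acct1', 'acct2')}
```

The same grouping idea applies to the second pattern the researchers mention: replace the post text with the ID of the retweeted tweet, and the function would flag accounts retweeting the same tweet within the window.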
The researchers believe their findings are important because they focus on networks of human agents rather than bots, which are perhaps what people tend to think of when exploring fake news. The data suggests that, in this case at least, the human actors were capable of more coordinated behavior than most bot networks.
Using the Twitter data from the court case, the team were able to identify nearly 1,000 additional accounts that were likely to have been involved in the NIS campaign. This has prompted them to examine previous campaigns to see whether they too involved Twitter-based astroturfing.
“We are currently studying around 10 more recent campaigns around the globe to see whether these coordination patterns can also be observed. Preliminary results suggest that this is indeed the case,” the team conclude.
The research is part of a growing body of work highlighting the use of human or automated accounts to manipulate information during elections on Twitter. What is far less clear is quite what the social networks are doing about it.