The Manipulation Of Social Media Metadata

With the rise of Russian bots, most people are aware of the potential for social media to be manipulated by malicious actors. Most of the time this involves creating fake accounts to try to sway the conversation. As a recent report from Data & Society highlights, however, it can also involve the manipulation of social media metadata.

Metadata can perhaps best be described as the names used to represent aggregated data. These names allow data to be assembled, classified and organized into defined structures, with decisions then made possible by the way this aggregation takes place.

So how can manipulating such metadata distort things? It fundamentally alters the way data is assembled, classified and organized, so in a social media context it can materially alter how things appear, especially the apparent popularity of individuals or content.

The practice of such distortion is referred to in the report as ‘data craft’, and involves practices that “create, rely on, or even play with the proliferation of data on social media by engaging with new computational and algorithmic mechanisms of organization and classification.”

Platform activity

Bad actors attempt to manipulate what are referred to as ‘platform activity signals’, including usernames, follower counts and post dates. The paper highlights three distinct case studies where this took place on Instagram, Twitter and Facebook. The threat is especially concerning because so many content moderation systems are automated, and these systems can often miss the data craft that facilitates disinformation campaigns. If the threat is to be tackled, it’s important that it’s first understood, and the report does a fine job of introducing the reader to the issue.

The paper provides a number of tips to help platforms identify data craft in action. For instance, one way to gauge an account’s legitimacy is to test whether its username and profile information are consistent across platforms.
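As a rough illustration of that kind of cross-platform consistency check, the sketch below compares usernames and bios drawn from hypothetical profile records; the field names, similarity measure and threshold are assumptions for illustration, not anything specified in the report.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1] using difflib."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def looks_consistent(profiles: list[dict], threshold: float = 0.8) -> bool:
    """Check whether username and bio text are broadly consistent
    across profiles gathered from different platforms.

    `profiles` is a hypothetical list of dicts like
    {"platform": "twitter", "username": "...", "bio": "..."}.
    """
    if len(profiles) < 2:
        return True  # nothing to compare against
    base = profiles[0]
    for other in profiles[1:]:
        if similarity(base["username"], other["username"]) < threshold:
            return False
        if similarity(base.get("bio", ""), other.get("bio", "")) < threshold:
            return False
    return True

# Example: the same persona as it appears on two platforms
profiles = [
    {"platform": "twitter", "username": "jane_reporter", "bio": "Local news, politics"},
    {"platform": "instagram", "username": "jane.reporter", "bio": "Local news & politics"},
]
print(looks_consistent(profiles))  # True: minor variations are tolerated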

Platforms may also look at a number of factors when trying to identify imposter accounts: the date the account was created, the number of posts made since then, whether the account has ever been dormant or suspended in the past, and whether its followers are authentic.
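A heuristic along the lines sketched below could combine those factors into a simple risk score; the thresholds, weights and field names here are illustrative assumptions rather than anything the report prescribes.

from datetime import datetime, timezone

def imposter_risk_score(account: dict) -> int:
    """Score an account on the signals mentioned above: creation date,
    posting volume, dormancy/suspension history and follower authenticity.
    All thresholds and weights are illustrative assumptions.
    """
    score = 0
    now = datetime.now(timezone.utc)

    # Very young accounts are a mild warning sign
    age_days = (now - account["created_at"]).days
    if age_days < 30:
        score += 2

    # A burst of posts from a brand-new or long-dormant account
    if account["post_count"] > 100 and age_days < 30:
        score += 2
    if account.get("was_dormant") and account["recent_posts_per_day"] > 20:
        score += 3

    # Prior suspensions are a strong signal
    if account.get("times_suspended", 0) > 0:
        score += 3

    # Followers that look inauthentic (e.g. flagged by a separate check)
    if account.get("suspect_follower_ratio", 0.0) > 0.5:
        score += 3

    return score  # higher = more likely an imposter account

# Example usage with a hypothetical account record
account = {
    "created_at": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "post_count": 450,
    "was_dormant": True,
    "recent_posts_per_day": 35,
    "times_suspended": 1,
    "suspect_follower_ratio": 0.7,
}
print(imposter_risk_score(account))

In practice no single factor is decisive, which is why a sketch like this weighs several signals together rather than treating any one of them as proof of manipulation.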

“Politically motivated manipulators understand the representational problems with naming data from platform activity signals because their techniques rely on creating a gap between accurate representation of legitimate platform activity signals and falsified ones,” the paper concludes. “Working within this gap is how manipulators are getting craftier and more agile at avoiding automated moderation techniques.”

Just as researchers, policy makers and platform administrators have expanded their knowledge of the various disinformation tactics used by bad actors in other domains, it’s vital that they also explore the various metadata-based methods used to manipulate online content and discussions. This paper provides a good introduction to help them do just that.
