The Algorithm That Can Reliably Spot Cyberbullies On Twitter

The anonymous nature of Twitter makes it a haven for trolls and cyberbullies, and the social network appears to be fighting a losing battle against the sheer volume of cyberbullying on its platform.

New research from Binghamton University highlights a machine learning-based approach that is able to identify bullies on Twitter with an accuracy of 90%.

The researchers dissected the often ambiguous environment of online bullying to understand the behavioral patterns exhibited by abusive Twitter users, and how these differ from those of normal users.

“We built crawlers — programs that collect data from Twitter via a variety of mechanisms,” the researchers explain. “We gathered tweets of Twitter users, their profiles, as well as (social) network-related things, like who they follow and who follows them.”
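To give a sense of what such a crawl produces, here is a minimal sketch of the kind of per-user record the researchers describe gathering — tweets, profile, and follow relationships. The class name, fields, and the reciprocity metric are illustrative assumptions, not the study's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    """One crawled Twitter user: tweets, profile name, and network ties.
    (Hypothetical schema for illustration; not the researchers' format.)"""
    screen_name: str
    tweets: list = field(default_factory=list)
    followers: set = field(default_factory=set)   # who follows this user
    following: set = field(default_factory=set)   # who this user follows

    def reciprocity(self) -> float:
        """Fraction of followed accounts that follow back — one simple
        network feature a crawl like this makes available."""
        if not self.following:
            return 0.0
        return len(self.following & self.followers) / len(self.following)

# Example: a user followed back by two of the three accounts they follow.
user = UserRecord("example_user",
                  tweets=["hello world"],
                  followers={"a", "b"},
                  following={"a", "b", "c"})
print(round(user.reciprocity(), 2))  # → 0.67
```

Storing the follow sets alongside the tweets is what later lets the analysis combine text features with social-network features for the same account.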

Bullies in the haystack

The team then performed natural language processing and sentiment analysis on the tweets they had harvested, before conducting an analysis of the social network to dissect the connections between users.
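The sentiment-analysis step can be sketched with a simple lexicon-based scorer: count positive and negative words per tweet. This is a toy stand-in for illustration — the word lists below are invented, and the researchers' actual NLP pipeline is not described in this detail.

```python
# Illustrative word lists only; not the researchers' lexicon.
NEGATIVE = {"stupid", "loser", "ugly", "hate"}
POSITIVE = {"great", "thanks", "love", "awesome"}

def sentiment_score(tweet: str) -> int:
    """Crude sentiment: +1 per positive word, -1 per negative word.
    Scores well below zero flag tweets worth a closer look."""
    words = tweet.lower().split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

print(sentiment_score("thanks you are great"))    # → 2
print(sentiment_score("you are a stupid loser"))  # → -2
```

Aggregating such per-tweet scores over a user's whole timeline yields one of the behavioral features a classifier can weigh.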

They used algorithms to classify two specific types of bad behavior online: cyberbullying and cyber aggression. The algorithms were able to spot both forms of behavior with around 90% accuracy.

“In a nutshell, the algorithms ‘learn’ how to tell the difference between bullies and typical users by weighing certain features as they are shown more examples,” the researchers explain.

While the work marks a considerable amount of progress, the researchers fully accept that it nonetheless marks only the first step of what needs to be a much more significant body of work to help stamp out cyberbullying.

“One of the biggest issues with cyber safety problems is the damage being done is to humans, and is very difficult to ‘undo,'” they conclude. “For example, our research indicates that machine learning can be used to automatically detect users that are cyberbullies, and thus could help Twitter and other social media platforms remove problematic users. However, such a system is ultimately reactive: it does not inherently prevent bullying actions, it just identifies them taking place at scale. And the unfortunate truth is that even if bullying accounts are deleted, even if all their previous attacks are deleted, the victims still saw and were potentially affected by them.”
