The annoyance of such individuals has led many to try to figure out how to stop them. For instance, back in 2013, SMC4 developed a service designed to stop harmful messages reaching their intended target, using a smart algorithm to halt them in their tracks.
Users connect their social media accounts to the service, then identify any harmful phrases they wish to have filtered out. Each incoming message is run through the filter, which tests it for indications of abuse, racism, swearing and so on. Different responses can be configured for different types of message, so that, for instance, racism is handled differently from a complaint.
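To make the idea concrete, here is a minimal sketch of a per-category message filter of the kind described above. The categories, phrases, and response actions are hypothetical illustrations, not SMC4's actual configuration or API.

```python
# Hypothetical per-category filter: each category has a set of flagged
# phrases and its own configured response, so racism can be handled
# differently from, say, mere profanity.

FILTERS = {
    "racism": {"slur_a", "slur_b"},      # placeholder phrases
    "profanity": {"damn", "hell"},       # placeholder phrases
}

RESPONSES = {
    "racism": "block_and_report",    # never deliver; flag for review
    "profanity": "hold_for_review",  # queue for a moderator
}

def filter_message(text):
    """Return the configured action for a message, or 'deliver' if clean."""
    words = set(text.lower().split())
    for category, phrases in FILTERS.items():
        if words & phrases:
            return RESPONSES[category]
    return "deliver"
```

A real service would need fuzzy matching and context awareness rather than exact word lookups, but the routing logic (category match, then per-category response) is the core idea.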
Of course, that’s quite a blunt tool, and for community managers, being able to identify trolls is invaluable, as it lets you cut them off at the source. A recent study, led by researchers at Stanford, might therefore be of interest.
The researchers analyzed the behavior of trolls on a variety of large online communities to help create an algorithm that they believe can identify one in as few as 10 posts.
The authors studied CNN, Breitbart and IGN, each of which has a long list of members who have been banned over the years for various acts of antisocial behavior.
This list, together with the posting history of each member prior to their ban, gave the researchers a large catalog of content through which to study how trolls behave.
The researchers attempted to answer three different questions about each troll:
- Are they trolls from the start, or do they become trolls toward the end of their time in the community?
- Does the reaction of the community to their behavior make it worse?
- Can trolls be accurately identified early in their community life?
The researchers compared the posting history of those who were eventually banned with that of better-behaved members, and some clear differences emerged.
What separates a troll from a respected member
For instance, the authors measured the readability of posts using the Automated Readability Index, a standard readability metric, and it emerged that trolls tend to contribute poorer content from the outset. Worse still, the quality of their contributions declines over time.
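The Automated Readability Index is simple to compute: it combines average word length with average sentence length. Here is a self-contained sketch of the standard formula; the tokenization rules are a simplifying assumption, as real implementations vary in how they count characters and split sentences.

```python
import re

def automated_readability_index(text):
    """Compute the Automated Readability Index (ARI) for a piece of text.

    ARI = 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43

    where 'characters' counts letters and digits only. Higher scores
    indicate text that demands a higher reading level.
    """
    words = re.findall(r"[A-Za-z0-9']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not words or not sentences:
        return 0.0
    characters = sum(len(re.sub(r"[^A-Za-z0-9]", "", w)) for w in words)
    return (4.71 * characters / len(words)
            + 0.5 * len(words) / len(sentences)
            - 21.43)
```

Short, simple sentences score low, while long words and run-on sentences push the score up, which is why the metric can serve as a cheap proxy for post quality.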
While communities appeared forgiving to begin with, thus giving trolls a stay of execution, they became much less tolerant of them over time.
“This results in an increased rate at which [posts from antisocial users] are deleted,” the authors say.
The data suggests that the difference between trolls and non-trolls is as clear as chalk and cheese, which made creating an algorithm to detect them relatively straightforward.
“In fact, we only need to observe five to 10 user posts before a classifier is able to make a reliable prediction,” they declare.
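To illustrate how a prediction from just a handful of posts might work, here is a toy scoring function. The features (share of posts deleted by moderators, average readability) echo the signals the study describes, but the weights and threshold are invented for illustration and are not the researchers' trained classifier.

```python
# Toy early-detection sketch: score a user from their first 5-10 posts.
# Each post is a dict like {"deleted": bool, "readability": float},
# where readability could be an ARI score (lower = poorer writing).

def troll_score(posts):
    """Return a score for a user; higher means more troll-like."""
    deleted_rate = sum(p["deleted"] for p in posts) / len(posts)
    avg_readability = sum(p["readability"] for p in posts) / len(posts)
    # Invented weights: moderator deletions count strongly against a
    # user; readability below an arbitrary bar of 6 adds a small penalty.
    return 2.0 * deleted_rate + 0.1 * max(0.0, 6.0 - avg_readability)

def is_likely_troll(posts, threshold=0.8):
    """Flag a user whose early-post score crosses the (invented) threshold."""
    return troll_score(posts) >= threshold
```

The real classifier in the study was trained on labeled ban data rather than hand-picked weights, but the shape is the same: aggregate a few per-post signals and compare against a learned decision boundary.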
With online communities seemingly destined to be an ever larger part of our lives, being able to detect a troll early in their community life could be a godsend for community managers the world over.
Suffice it to say, any automated approach requires caution to ensure legitimate members don’t get caught in the net, but hopefully it’s a service the researchers will continue to refine.
All of which raises the ultimate question: when might this become available to community managers? Unfortunately, that’s a question I don’t have an answer to.