That Twitter is awash with fake accounts is hopefully not a surprise to you. Indeed, it’s a topic that I’ve covered on the blog a number of times. It’s been suggested by some research that as many as 50% of the followers of big brands are in fact fakes, with a number of high profile cases of people or brands being caught out inflating their follower numbers artificially.
The rise in so-called crowdturfing has prompted a number of attempts at detecting the fakes from the real users. StatusPeople were one of the first back in 2012, but as the influence of Twitter has grown, so too have the stakes. I wrote about a new tool, called BotOrNot, that was backed by the US military amongst others, and whose creators believed it had become quite proficient at sniffing out fake accounts.
Of course, it should go without saying that the bot makers are proving equally inventive, and it’s quite appropriate that in the month that the Turing Test has apparently been cracked for the first time, a study has emerged highlighting the influential power of Twitter bot accounts.
The researchers believe that the Twitter bots that they created as part of their study would not only infiltrate social groups, but also become rather influential amongst those groups. Through their work, they believe they’ve identified various factors that go into making a bot successful, or not.
Their study began with the creation of 120 fake accounts that they cut loose onto Twitter. Each account was set up with a profile and a few followers to get it going, before being left to grow according to the rules given to it. The accounts would generate content either by retweeting the messages of others, or by creating their own tweets based upon a set of rules that would pick out frequently used words on a particular topic.
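The content-generation rule can be sketched in a few lines of Python. To be clear, this is only an illustration of the frequent-word idea, not the researchers' actual code: it counts word frequencies across a toy corpus (standing in for tweets scraped on a topic) and samples words in proportion to those counts.

```python
import random
from collections import Counter

def build_word_weights(tweets):
    """Count how often each word appears across a corpus of topic tweets."""
    return Counter(word.lower() for t in tweets for word in t.split())

def generate_tweet(weights, length=8):
    """Assemble a pseudo-tweet by sampling words in proportion to frequency.

    The output is grammatically incoherent, which (per the study) may not
    matter much on Twitter."""
    words = list(weights.keys())
    freqs = list(weights.values())
    return " ".join(random.choices(words, weights=freqs, k=length))

# Toy corpus standing in for tweets gathered on the 'software development' topic
corpus = [
    "shipping the new release today",
    "unit tests keep the release stable",
    "refactoring legacy code before the release",
]
weights = build_word_weights(corpus)
print(generate_tweet(weights))
```

A real bot would of course feed this from a live stream of topic tweets rather than a fixed list, but the principle is the same: common words in, plausible-looking noise out.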
The bot army was also given various activity levels. For instance, some were instructed to post at least once per hour, whilst others merely every two hours. All of the bots were programmed to be dormant (i.e. to 'sleep') between 10pm and 9am Pacific time.
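The dormancy window is straightforward to express in code. Here's a minimal sketch, assuming the bots simply check the current Pacific time before posting; the function name and structure are my own, not anything from the study.

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

SLEEP_START = time(22, 0)  # 10pm Pacific
SLEEP_END = time(9, 0)     # 9am Pacific

def is_dormant(now=None):
    """Return True if the bot should be 'sleeping' (10pm-9am Pacific)."""
    if now is None:
        now = datetime.now(ZoneInfo("America/Los_Angeles"))
    t = now.time()
    # The window wraps past midnight, so check the two halves separately
    return t >= SLEEP_START or t < SLEEP_END
```

A posting loop would then just skip its scheduled tweet whenever `is_dormant()` returns True, which is what makes the accounts' activity pattern look like a human with a timezone rather than a script running around the clock.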
Their final instructions revolved around who they would follow. This strategy had three strands to it. The first would see the bots follow 200 people at random from their Twitter stream. The second strand was more targeted, and saw the bots follow 200 people who regularly posted on a particular topic, which in this case was software development. Finally, they were programmed to follow 200 software developers who were all interconnected on Twitter.
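The three follow strands could be sketched as a single selection function. Everything here is illustrative: the account fields (`topics`, `follows`) and the function name are assumptions I've made for the sketch, not details from the study.

```python
import random

def pick_targets(strategy, accounts, n=200, topic="software development"):
    """Choose up to n accounts to follow under one of three strands.

    `accounts` is a list of dicts with hypothetical fields:
      'name'    - the account's handle
      'topics'  - set of topics the account tweets about
      'follows' - set of handles the account follows
    """
    if strategy == "random":
        # Strand 1: anyone at all from the stream
        pool = accounts
    elif strategy == "topic":
        # Strand 2: accounts that regularly post on the chosen topic
        pool = [a for a in accounts if topic in a["topics"]]
    elif strategy == "interconnected":
        # Strand 3: topic posters who also follow others in the same group
        names = {a["name"] for a in accounts if topic in a["topics"]}
        pool = [a for a in accounts
                if a["name"] in names and a["follows"] & names]
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return random.sample(pool, min(n, len(pool)))
```

The interesting comparison in the study is between strands 2 and 3: targeting a topic versus targeting a tightly knit cluster within that topic, where follow-backs from one member may encourage others.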
So, how did they get on, I hear you cry? Well, of the 120 bots set out into the ether, 38 were picked up by the Twitter police, or roughly a third of the total. Arguably more interesting, however, was the success the surviving bots had in tricking the human users in their ‘target’ network.
During the course of the experiment, the bot army obtained a smidgeon under 5,000 follows from just under 2,000 different users. Over 20% of the bots managed to obtain in excess of 100 followers, which is more than 46% of human Twitter users can boast.
What’s more, the bots also managed to gain a Klout score comparable with a number of academics and researchers in the niche they were in!
The researchers reveal that the key to success was activity. The more active the bots were, the better they fared. What’s more, the bots that generated their own ‘original’ tweets seemed to fare better than those that relied on retweeting others’ content. Indeed, the researchers suggest the relative incoherence of those tweets may even have been a benefit.
“This is possibly because a large fraction of tweets in Twitter are written in an informal, grammatically incoherent style, so that even simple statistical models can produce tweets with quality similar to those posted by humans in Twitter,” they say.
A big part of the research behind BotOrNot was to understand the validity of information posted on Twitter. The implication was that there may be attempts by nefarious sources to create bot armies, not dissimilar to the one created by the researchers, and use them to influence the zeitgeist in some way. You could imagine, for instance, using such a resource to discredit a political opponent or a commercial rival.
It’s estimated that there are 20 million Twitter bots in operation, and if this research is any indication, they are getting smarter all the time. Interesting times for both Twitter, and those of us who are regular users of the site.
Sadly I'm not at all surprised by this. I think people often act without thinking on Twitter, so if it looks like a horse and acts like a horse…
That's incredible. Are we really so dumb?
I think there's just a natural tendency to believe what we read online, Rob.
This is quite something. You can imagine this being used for all kinds of propaganda, be it corporate or political. This is especially worrying given your earlier post on social media (i.e. Twitter) being something of an idiot box where we don't question what we read.
To be honest, I'd be surprised if it isn't already being used for that kind of thing. Would be silly not to, wouldn't it?
A bit surprised by the amount of bots, but then again, I shouldn't be. I've noticed an increase in re-shares lately from what are obviously fake accounts, or bots, and it's annoying. But I can also see folks, real folks, retweeting them, giving them "validity", if you will. That's even more depressing…
It's an interesting one for sure. I wonder how obviously fake these accounts were? They seemed to suck in a lot of people.