Fake news has been a hot topic over the past year, but much of the debate has centered on political news. Whilst that is undoubtedly important, there is arguably a more pressing concern when deliberate misinformation spreads into areas such as healthcare.
This was an issue examined in a recent study from researchers at the University of Southern California, who explored how bot accounts were being used to spread misinformation about electronic cigarettes.
Reliable information
The researchers examined over 2.2 million e-cigarette related tweets over a five-month period, in what the authors believe is the first study of its kind. The analysis found that bots were twice as likely as human users to promote both e-cigarette products and the notion that they can help people quit smoking.
“Social bots can pass on health advice that hasn’t been scientifically proven,” the authors say. “The jury is still out on if e-cigarettes are useful smoking cessation tools, but studies have shown that the chemicals in vape juice are harmful. Scientists are still trying to understand if vaping damages the respiratory and cardiovascular system. Bottom line: Online falsehoods can influence offline behavior.”
Whilst the evidence around e-cigarettes is mixed, they are certainly popular: data suggests they're the most commonly used tobacco product among teenagers, and some 59% of adult users also smoke traditional tobacco.
Bot or not
The team distinguished bots from human users by analyzing their behavior, including the number of retweets and mentions, the ratio of followers to followees, and the kind of content they posted.
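The paper's own classifier isn't reproduced here, but as a rough illustration of how behavioural signals like these can be combined into a bot-likelihood score, here is a minimal Python sketch. The field names and thresholds are hypothetical, chosen purely for illustration, and are not those used in the study.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    """Per-account features of the kind described above (hypothetical fields)."""
    followers: int
    followees: int
    retweets: int   # retweets made by the account
    mentions: int   # mentions of other users in its tweets
    tweets: int     # total tweets in the observation window

def bot_score(a: AccountStats) -> float:
    """Return a rough 0-1 score; higher means more bot-like.

    The thresholds are illustrative only, not those from the study.
    """
    score = 0.0
    # Bots often follow far more accounts than follow them back.
    if a.followees > 0 and a.followers / a.followees < 0.1:
        score += 0.4
    # A feed dominated by retweets and mentions suggests automated amplification.
    if a.tweets > 0 and (a.retweets + a.mentions) / a.tweets > 0.8:
        score += 0.4
    # Very high posting volume is another common signal.
    if a.tweets > 1000:
        score += 0.2
    return min(score, 1.0)

# Example: an account following 5,000 users with only 50 followers,
# posting almost nothing but retweets, scores as strongly bot-like.
print(bot_score(AccountStats(followers=50, followees=5000,
                             retweets=900, mentions=50, tweets=1000)))
```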
For instance, it emerged that bots were much more likely to use hashtags alongside posts revealing how people had managed to quit smoking with the help of e-cigarettes.
When human users used hashtags, however, they were more commonly associated with the behaviors, identity and community that have emerged around vaping.
“Use of these hashtags may serve further internalization of, and social bonding around, vaping-related identities,” the authors say. “These hashtags also suggest discussions of vaping may occur in an echo chamber on Twitter in which ideas and beliefs are amplified by those in the network, normalizing vaping.”
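To give a flavour of how hashtag patterns can be compared between groups of accounts, here is a small illustrative snippet. The sample tweets and hashtags below are made up for demonstration; the study itself analysed the full collection of e-cigarette related tweets.

```python
from collections import Counter
import re

HASHTAG = re.compile(r"#\w+")

def hashtag_counts(tweets):
    """Count hashtags across a list of tweet texts (case-insensitive)."""
    counts = Counter()
    for text in tweets:
        counts.update(tag.lower() for tag in HASHTAG.findall(text))
    return counts

# Hypothetical samples, echoing the patterns described above.
bot_tweets = ["I finally #quitsmoking thanks to #vaping!",
              "#ecig #quitsmoking deals here"]
human_tweets = ["Loving the #vapefam community",
                "#vapelife cloud comp tonight"]

print(hashtag_counts(bot_tweets).most_common(3))
print(hashtag_counts(human_tweets).most_common(3))
```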
The researchers hope that their findings will help alert public health officials to the way messages are being communicated online. For instance, a study from a few years ago highlighted how important influential nodes are in the spread of information. A report from Tow found that the news media undoubtedly fit that bill, but often do more harm than good by spreading misinformation.
So there's clearly work to be done, but what is equally clear is that there is an ongoing battle to ensure the information people receive is truthful and reliable, and in that battle the influence of social media is unquestionable, not just around areas such as smoking but right across the spectrum.
“There are many unhealthy choices social bots can promote, and our future research will focus on other areas such as tanning beds, supplements, fad diets or sugary drinks,” the team conclude. “People need to be aware of these fake social media accounts, and public health campaigns should be implemented to counteract the most dangerous unhealthy behaviors these bots are encouraging.”