With the polarization of public discourse a hot topic in recent years, many conservative commentators have accused social media platforms of an undue liberal bias, especially as various conservative figures have been suspended from these sites for spreading misinformation or inciting hatred.
Research from Indiana University shows, however, that it isn't anything inherent in these platforms that leans in a liberal direction. Indeed, the researchers find that any political biases evident on Twitter tend to favor more conservative content. What's more, the content we see on social media depends heavily on the contacts we make on the platforms.
“Our main finding is that the information Twitter users see in their news feed depends on the political leaning of their earliest connections,” the researchers say. “We found no evidence of intentional interference by the platform. Instead, bias can be explained by the use, and abuse, of the platform by its users.”
Bias online
In an attempt to better understand the biases we experience online, the researchers deployed 15 bots, which they dubbed "drifters," each designed to behave more neutrally than many of the other social bots loose on Twitter. They were largely programmed to mimic human behavior, with an algorithm activating them at random times to perform certain actions.
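To make that activation scheme concrete, here is a minimal sketch of how such a neutral bot might work. Everything in it is an illustrative assumption rather than the study's actual implementation: the `DrifterBot` class, the action list and weights, and the exponential gaps between actions.

```python
import random
import time

class DrifterBot:
    """Hypothetical neutral bot that wakes at random times and picks
    an action at random, with no built-in political preference."""

    # Candidate actions and illustrative weights (assumed, not from the study).
    ACTIONS = ["tweet", "retweet_from_feed", "like_from_feed", "follow_friend_of_friend"]
    WEIGHTS = [0.25, 0.35, 0.25, 0.15]

    def __init__(self, name: str):
        self.name = name

    def act(self) -> None:
        action = random.choices(self.ACTIONS, weights=self.WEIGHTS, k=1)[0]
        # A real bot would call the platform API here; this sketch just logs.
        print(f"{self.name}: performing '{action}'")

    def run(self, mean_gap_seconds: float = 1.0, steps: int = 5) -> None:
        # Exponential inter-arrival times give irregular, human-like gaps.
        # A deployed bot would wait hours; seconds keep the demo quick.
        for _ in range(steps):
            time.sleep(random.expovariate(1.0 / mean_gap_seconds))
            self.act()

if __name__ == "__main__":
    DrifterBot("drifter-01").run()
```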
Each bot began its journey with a single "friend": a popular news source aligned with one of five positions on the US political spectrum (left, center-left, center, center-right, or right). From this starting point, the bot was cut loose, with the researchers collecting data on it daily.
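A sketch of that seeding step might look like the following. The outlet handles are placeholders, and the even split of bots across alignment bins is an assumption; the study's actual source list and assignment procedure may differ.

```python
import random

# Hypothetical news sources binned by US political alignment
# (placeholder handles, not the study's actual list).
SOURCES = {
    "left":         ["@left_outlet_a", "@left_outlet_b"],
    "center-left":  ["@cleft_outlet_a", "@cleft_outlet_b"],
    "center":       ["@center_outlet_a", "@center_outlet_b"],
    "center-right": ["@cright_outlet_a", "@cright_outlet_b"],
    "right":        ["@right_outlet_a", "@right_outlet_b"],
}

def seed_bots(bots_per_bin: int = 3) -> dict[str, list[str]]:
    """Give each bot exactly one initial friend from its assigned bin.

    Five alignment bins with three bots each yields the 15 bots
    mentioned above; the even split is an assumption."""
    assignments = {}
    for alignment, outlets in SOURCES.items():
        for i in range(bots_per_bin):
            bot_id = f"drifter-{alignment}-{i}"
            assignments[bot_id] = [random.choice(outlets)]  # single initial friend
    return assignments

print(seed_bots())
```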
After five months, they analyzed the collected data to understand what kind of content the bots were generating, the kind of social network they had built, and their exposure to low-credibility sources of news and information.
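One of those measurements, exposure to low-credibility sources, might resemble the sketch below, which scores a bot's daily feed by the share of posts linking to flagged domains. The `LOW_CREDIBILITY` set and the post schema are illustrative assumptions, not the study's actual data pipeline.

```python
# Hypothetical low-credibility domain list (placeholder entries).
LOW_CREDIBILITY = {"fakenews.example", "clickbait.example"}

def low_credibility_share(feed: list[dict]) -> float:
    """Fraction of posts in a bot's daily feed that link to
    low-credibility domains (assumed post schema: {'domain': str})."""
    if not feed:
        return 0.0
    flagged = sum(1 for post in feed if post.get("domain") in LOW_CREDIBILITY)
    return flagged / len(feed)

# Example: two of four posts link to flagged domains -> 0.5
sample_feed = [
    {"domain": "fakenews.example"},
    {"domain": "reuters.com"},
    {"domain": "clickbait.example"},
    {"domain": "apnews.com"},
]
print(low_credibility_share(sample_feed))
```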
The analysis revealed that the initial friend had a huge impact on the ultimate direction taken by the bots, both in terms of their social network and their exposure to poor-quality information.
“Early choices about which sources to follow impact the experiences of social media users,” the researchers explain.
Drawn to the right
Interestingly, the bots seemed to be drawn to the right of the political spectrum. For instance, bots that began with right-wing friends became embedded in largely homogeneous networks, spread more right-wing (and often low-credibility) content, and engaged more with other automated accounts.
That this occurred even though the bots were designed to be neutral reflects biases in the online information ecosystem that are created by the interactions between users.
“Online influence is affected by the echo-chamber characteristics of the social network,” the researchers say. “Drifters following more partisan news sources received more politically aligned followers, becoming embedded in denser echo chambers.”
The authors argue that to avoid succumbing to such echo chambers, we should moderate not only the content we consume but also the social ties we establish.
“We hope this study increases awareness among social media users about the implicit biases of their online connections and their vulnerabilities to being exposed to selective information, or worse, such as influence campaigns, manipulation, misinformation, and polarization,” they conclude. “How to design mechanisms capable of mitigating biases in online information ecosystems is a key question that remains open for debate.”