It’s increasingly well understood that social media users are exhausted by the extreme political content appearing in their feeds. Research from Northwestern University highlights how a fundamental misalignment between what users want and what algorithms serve them has contributed to the spread of misinformation.
“There are reputational components that Twitter and Facebook must face when it comes to elections and the spread of misinformation,” the researchers explain. “The social platforms also stand to benefit from better aligning their algorithms to improve user experience.”
Online learning
Humans have a natural inclination to learn from individuals they perceive as prestigious or influential within their social groups. This predisposition has evolved to facilitate cooperation and survival.
In today’s large, diverse, and complex communities, however, and especially on social media, these inherited learning biases become less reliable. Online connections are not always trustworthy sources, and prestige or influence is easy to feign on digital platforms.
When these learning biases first evolved, morally and emotionally charged information was crucial: it helped reinforce group norms and contributed to collective survival.
User engagement
By contrast, algorithms predominantly prioritize information that maximizes user engagement, because engagement drives advertising revenue. As a result, they amplify what the researchers term Prestigious, Ingroup, Moral, and Emotional (PRIME) information, without regard to whether that content is accurate or representative of a group’s opinions.

Feeds therefore become saturated with content that plays to these human biases, often amplifying extreme political content or controversial topics. Users who see only this kind of information, with little exposure to diverse perspectives, can develop a skewed sense of what different groups actually believe.
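To make that dynamic concrete, the sketch below shows a toy, engagement-only ranker in Python. The Post fields, weights, and the prime_score signal are illustrative assumptions rather than anything drawn from a real platform, but they capture the basic point: when posts are ordered purely by predicted engagement, PRIME-style content tends to float to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    prime_score: float  # 0..1: how strongly moral/emotional/in-group the content reads (toy signal)

def predicted_engagement(post: Post) -> float:
    """Toy engagement model: weighted past interactions, boosted for PRIME-style content.

    The weights are invented for illustration; real rankers learn them from behavioral data.
    """
    interactions = post.likes + 2.0 * post.shares + 1.5 * post.comments
    return interactions * (1.0 + post.prime_score)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed purely by predicted engagement, with no diversity constraint."""
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Local park cleanup this weekend", likes=40, shares=5, comments=10, prime_score=0.1),
    Post("Outrage at the rival party's latest move!", likes=35, shares=20, comments=30, prime_score=0.9),
])
print([p.text for p in feed])  # the outrage post ranks first despite having fewer likes
```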
“It’s not that the algorithm is designed to disrupt cooperation,” the researchers say. “It’s just that its goals are different. And in practice, when you put those functions together, you end up with some of these potentially negative effects.”
Making things better
To tackle this issue, the researchers propose raising user awareness of how social media algorithms work and why specific content appears in their feeds.
While social media companies often keep the inner workings of their algorithms undisclosed, the authors suggest that these companies could begin by providing explanations for why a particular post is being shown to a user. This might involve clarifying whether the content appears due to engagement from the user’s friends or because of its overall popularity.
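As a rough illustration of what such an explanation could look like, the hypothetical helper below attaches a plain-language reason to a post based on two assumed signals: friend engagement and overall popularity. The field names and threshold are invented for the example; real platforms use many more signals.

```python
def explain_ranking(post: dict, viewer_friend_ids: set, popularity_threshold: int = 1000) -> str:
    """Return a plain-language reason a post was surfaced in a user's feed.

    Hypothetical signals: which of the viewer's friends engaged with the post,
    and its total engagement across the platform.
    """
    friend_engagers = post.get("engaged_user_ids", set()) & viewer_friend_ids
    if friend_engagers:
        return f"Shown because {len(friend_engagers)} of your friends engaged with it"
    if post.get("total_engagements", 0) >= popularity_threshold:
        return "Shown because it is popular across the platform"
    return "Shown because it matches topics you often interact with"

post = {"engaged_user_ids": {"ana", "raj"}, "total_engagements": 250}
print(explain_ranking(post, viewer_friend_ids={"ana", "kim"}))  # friend-based explanation
```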
Furthermore, the researchers recommend that social media companies proactively modify their algorithms to better foster community. Rather than exclusively favoring PRIME information, algorithms could cap how heavily they amplify such content and prioritize showing users a diverse range of material, an approach sketched below. Users would still see engaging information, but polarizing or politically extreme content would no longer be overrepresented in their feeds.
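One way such a limit could work, under the same toy assumptions as the ranking sketch above, is to rank by engagement but cap the share of feed slots that highly PRIME posts can occupy, demoting the overflow rather than removing it. The scoring functions, fraction, and threshold here are hypothetical parameters chosen for illustration, not known platform values.

```python
def rank_with_prime_cap(posts, engagement_score, prime_score,
                        max_prime_fraction=0.3, prime_threshold=0.7):
    """Rank posts by engagement, but limit how many slots PRIME-heavy posts can take.

    `engagement_score` and `prime_score` are caller-supplied scoring functions
    (e.g. the toy `predicted_engagement` above); the fraction and threshold are
    illustrative assumptions.
    """
    ranked = sorted(posts, key=engagement_score, reverse=True)
    prime_budget = int(len(ranked) * max_prime_fraction)
    feed, deferred = [], []
    for post in ranked:
        if prime_score(post) >= prime_threshold:
            if prime_budget > 0:
                prime_budget -= 1
                feed.append(post)
            else:
                deferred.append(post)  # demoted to the end of the feed, not removed
        else:
            feed.append(post)
    return feed + deferred
```

Because demoted posts stay in the feed, engaging material remains reachable; it simply no longer crowds out everything else.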
The research team is also developing interventions that teach people to be more discerning consumers of social media, empowering users to navigate these platforms more consciously.
“As researchers, we understand the tension that companies face when it comes to making these changes and their bottom line. That’s why we think these changes could theoretically maintain engagement while also disallowing the current overrepresentation of PRIME information,” they conclude. “User experience might actually improve by doing some of this.”