The rise of fake news and misinformation has been one of the most fascinating trends of recent years. Social networks, including Facebook and Twitter, have been in the crosshairs of those who believe such misinformation has distorted and misled public opinion. It goes without saying that policing misinformation imposes a cost on those networks, both in terms of the resources required to remove it and the advertising revenue lost from the pageviews that are no longer generated.
Do the sums stack up to merit the effort? That was the question posed by a recent study from the University of Southern California. The study suggests that whilst this kind of content can increase pageviews, and therefore advertising revenues, it can nonetheless have a negative impact on the profitability of the website.
“Our models show that engagement levels fall when users aren’t warned of posts that contain misinformation,” the authors say. “And they don’t just fall; they fall to levels lower than when users are warned.”
Reputational damage
The paper says that clicks fall across the board by over half when platforms do nothing to warn users about fake news. This failure to intervene can have a significant impact on engagement levels, as once people begin to doubt the reliability of the content they see, they lose trust in the platform altogether.
One of the measures taken by Facebook has seen an icon placed next to shared posts to help users identify the origin of the content. It has been criticized, however, for still leaving users to decide for themselves whether the source is credible.
The researchers hope that their findings will go some way towards helping networks better optimize their fake news warnings. The key, they argue, is to leverage the structure of the network itself to stop the spread of fake news.
They suggest that platforms might be well served by focusing on the various incentive issues surrounding the creation and monitoring of content. For instance, they might offer reputation scores or financial rewards to users who spot and report fake news.
“That might make people think twice,” the authors say. “Engagement might go down, but quality will go up, leading to a healthy long-term recovery in engagement.”
Too much information
The researchers plan to explore this topic further via a behavioral experiment designed to show how we consume and internalize the information we see on social media. They hope this will give them insight into how the apparent political polarization and incomplete learning have emerged, and use this to better inform actions to counter them.
“If the theoretical findings that initiated the project are true,” they say, “too much information, surprisingly, leads to incomplete learning. The troublesome phenomena we see are the result of the abundance of information on social media.”
As the decisions of social networks are ultimately driven by commercial factors, however, the researchers hope that this initial study will be a good first step in driving change.