Misinformation shot back into the spotlight (if it had ever really been away) with news that Facebook would be scaling back content moderation, ending its third-party fact-checking program in favor of mirroring X's community notes approach.
How effective is such community-based moderation? That was the subject of a recent study from the University of Cambridge's Leverhulme Centre for the Future of Intelligence, which explored how moderation works on Reddit and whether it's effective at stopping the spread of misinformation.
Community moderation
Moderation on Reddit has famously always been community based, with a laissez-faire approach to what is and isn't allowed and the community voting content up or down as it sees fit. Indeed, as Columbia Business School's Michael Morris highlights in his recent book Tribal, there was a huge backlash when former CEO Ellen Pao was hired with a remit of doing things differently.
She began by implementing an anti-harassment drive, then shut down a number of subreddits that were believed to serve as platforms for the harassment of individuals. These moves were strongly opposed by the free-speech-loving community, whose members vociferously hounded her until she was forced to resign and the stricken subreddits were restored.
The researchers accessed around 2 billion comments and homed in on the 9 million or so that related to climate change. This initial dataset was further refined to identify comments associated with low-credibility domains, based on Iffy.news credibility ratings, resulting in a final subset of 23,300 comments.
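To make that refinement step concrete, here is a minimal sketch of domain-based filtering, assuming a table of comment bodies and a set of domains rated low-credibility; the column names, example URLs, and domain list are all illustrative rather than taken from the study.

```python
import re
import pandas as pd

# Illustrative inputs: climate-related comments and a set of domains
# rated low-credibility (the study used Iffy.news ratings; this list
# and these comments are invented for the example).
comments = pd.DataFrame({
    "comment_id": ["c1", "c2"],
    "body": [
        "See https://example-unreliable.com/story for the truth",
        "Peer-reviewed source: https://www.nature.com/articles/123",
    ],
})
low_credibility_domains = {"example-unreliable.com"}

URL_RE = re.compile(r"https?://(?:www\.)?([^/\s]+)")

def extract_domains(text: str) -> set:
    """Pull the bare domains out of any URLs in a comment body."""
    return set(URL_RE.findall(text))

# Keep only comments linking to at least one low-credibility domain.
comments["domains"] = comments["body"].apply(extract_domains)
flagged = comments[comments["domains"].apply(
    lambda ds: bool(ds & low_credibility_domains)
)]
print(flagged[["comment_id", "domains"]])
```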
The researchers then analyzed the impact of the karma system by comparing the karma scores of credible comments with those of low-credibility comments. They also compared subreddits according to the intensity of moderation in each community, with, for instance, r/science more heavily moderated than r/TooAfraidToAsk.
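One simple, hedged way to run such a comparison is a non-parametric test on the two karma distributions, since karma scores are heavily skewed; the figures below are invented, and the study's actual statistical machinery may well differ.

```python
from scipy.stats import mannwhitneyu

# Invented karma scores standing in for the two groups of comments.
credible_karma = [12, 45, 3, 8, 120, 7, 33]
low_credibility_karma = [-5, 2, -12, 0, 1, -3, 4]

# A Mann-Whitney U test asks whether credible comments tend to score
# higher, without assuming karma is normally distributed.
stat, p = mannwhitneyu(credible_karma, low_credibility_karma,
                       alternative="greater")
print(f"U={stat}, p={p:.4f}")
```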
Peer norms
The results show the power of the karma system to regulate content. The analysis found that users reliably down-voted poor-quality content, which ultimately lowered its visibility across the site. The approach was especially effective between 2016 and 2019, though it waned somewhat after 2020, which the researchers believe could reflect shifts in public attention as well as changes in the platform's own policies.
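That visibility effect isn't magic: it falls out of how Reddit ranks comments. The default "best" sort, as it appeared in Reddit's formerly open-source codebase (the live ranking may have changed since), scores a comment by the lower bound of the Wilson score interval on its upvote ratio, so a burst of downvotes drags a comment down the thread even if its raw karma stays positive.

```python
from math import sqrt

def confidence(ups: int, downs: int) -> float:
    """Lower bound of the Wilson score interval on the upvote ratio,
    the scoring behind Reddit's 'best' comment sort as it appeared in
    the formerly open-source codebase."""
    n = ups + downs
    if n == 0:
        return 0.0
    z = 1.281551565545  # z-score for an 80% confidence level
    p = ups / n
    left = p + z * z / (2 * n)
    right = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (left - right) / (1 + z * z / n)

# A 10-up/2-down comment outranks a 100-up/80-down one: the vote
# ratio matters more than raw karma.
print(confidence(10, 2), confidence(100, 80))
```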
“Reddit users systematically penalize content deemed as unreliable by the community, indicating that the karma system is serving as a de facto moderation mechanism,” the researchers explain.
It was also evident that community norms were hugely influential. For instance, subreddits such as r/science and r/technology were extremely strict about standards and would regularly downvote low-quality content, reinforcing their norms of evidence-based discussion.
“Highly institutionalized communities such as r/science and r/technology show a significantly higher degree of community-based moderation… indicating the importance of epistemic norms,” the researchers explain.
By contrast, such norms barely existed in r/TooAfraidToAsk, where the community was more inclined to upvote low-credibility and controversial content.
Tidying up
The researchers also explored whether there was any link between the downvotes content received and its removal from the site. They found a significant overlap between downvoted content and moderator interventions in the earlier years of the study, but this diminished over time.
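As a toy illustration of how that overlap might be measured, the sketch below computes the yearly share of net-downvoted comments that moderators also removed; the column names and values are invented, not the study's data or method.

```python
import pandas as pd

# Invented per-comment records: final karma and whether mods removed it.
df = pd.DataFrame({
    "year":    [2017, 2017, 2018, 2021, 2021, 2022],
    "karma":   [-8, 4, -3, -6, 2, -1],
    "removed": [True, False, True, False, False, False],
})

df["downvoted"] = df["karma"] < 0

# Yearly share of net-downvoted comments that were also removed; a
# declining series would echo the fading overlap the study reports.
overlap = (
    df[df["downvoted"]]
    .groupby("year")["removed"]
    .mean()
    .rename("removed_share")
)
print(overlap)
```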
This prompts the researchers to ponder whether the effectiveness of community moderation was also waning.
“The initial period of strong community-based moderation from 2016-2019 saw a decline in karma score differences between 2020 and 2022,” the researchers explain.
The Reddit study underscores both the potential and the limitations of decentralized moderation. It suggests that while the wisdom of crowds can complement centralized efforts, its impact is neither uniform nor guaranteed. Policymakers and platform architects might glean valuable lessons here, chiefly the need to align user incentives with broader governance objectives if effective moderation is to be sustained over time.
In an era where misinformation threatens societal cohesion, this study offers a cautiously optimistic view of community-driven solutions. Yet, as the authors caution, the nuances of digital ecosystems demand further exploration to harness their full potential.