According to Pew Research surveys, roughly 40% of American adults have experienced some form of harassment online. This startling statistic underlines the important role moderation can play in the online communities so many of us use every day.
This raises the question of who should perform this role, however. It’s a question pondered in recent research from the Cornell SC Johnson College of Business, which found that both the type of moderator (i.e., a human or an AI) and the type of harassing content influenced people’s perceptions of the moderation decision and of the system itself.
Social moderation
The researchers developed a custom social media website where members could post pictures of food and comment on the pictures posted by others. The site featured a simulation engine, called Truman, designed to mimic the behavior of other users via preprogrammed bots.
“The Truman platform allows researchers to create a controlled yet realistic social media experience for participants, with social and design versatility to examine a variety of research questions about human behaviors in social media,” the researchers explain. “Truman has been an incredibly useful tool, both for my group and other researchers to develop, implement and test designs and dynamic interventions, while allowing for the collection and observation of people’s behaviors on the site.”
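Truman is an existing research platform, and the article does not describe its internals, so the sketch below is only a rough illustration of the general idea: preprogrammed bots replay scripted actions on a timeline so that every participant sees the same simulated community activity. The class names, fields, and example content are assumptions for illustration, not Truman’s actual API.

```python
# Hypothetical sketch of a Truman-style simulated feed: bots replay
# scripted actions (posts, comments) on a fixed timeline so the feed
# appears to unfold live for each participant. Names and structure are
# illustrative assumptions, not Truman's actual API.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ScriptedAction:
    offset_minutes: int            # minutes after the participant joins
    actor: str                     # bot account performing the action
    kind: str                      # "post" or "comment"
    text: str
    target_post: Optional[int] = None  # index of the post being commented on


@dataclass
class SimulatedFeed:
    script: List[ScriptedAction]
    visible: List[ScriptedAction] = field(default_factory=list)

    def refresh(self, minutes_since_join: int) -> List[ScriptedAction]:
        """Reveal only the scripted actions whose time has arrived."""
        self.visible = [a for a in self.script
                        if a.offset_minutes <= minutes_since_join]
        return self.visible


# Example: two bot posts and a later bot comment on the first post.
feed = SimulatedFeed(script=[
    ScriptedAction(0, "bot_ana", "post", "Homemade ramen tonight!"),
    ScriptedAction(5, "bot_raj", "post", "First try at sourdough."),
    ScriptedAction(12, "bot_ana", "comment", "Looks great!", target_post=0),
])
print(len(feed.refresh(minutes_since_join=10)))  # prints 2
```

The appeal of this approach for researchers is control: because the "other users" are scripted, every participant encounters the same social environment, and only the experimental manipulation varies.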
Participants were assigned to one of six experimental conditions, each involving a different combination of content moderation and type of harassing content.
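The article does not spell out what the six conditions were. One plausible reading of "type of moderator crossed with type of content" is a 3 x 2 design, for example an AI, human, or unidentified moderator paired with clearly harassing or more ambiguous comments. The snippet below is a minimal sketch of that kind of assignment under those assumed labels; it is not taken from the study itself.

```python
# Minimal sketch of assignment to six experimental conditions, assuming a
# 3 x 2 design: moderation source crossed with content type. The factor
# labels are illustrative assumptions, not drawn from the study.
import itertools
import random

MODERATOR_SOURCES = ["AI", "human", "unidentified"]  # assumed factor 1
CONTENT_TYPES = ["clearly_harassing", "ambiguous"]   # assumed factor 2

CONDITIONS = list(itertools.product(MODERATOR_SOURCES, CONTENT_TYPES))
assert len(CONDITIONS) == 6


def assign_condition(participant_id: str, seed: int = 42) -> tuple:
    """Deterministically map a participant ID to one of the six cells,
    so the same ID always lands in the same condition."""
    rng = random.Random(f"{seed}:{participant_id}")
    return rng.choice(CONDITIONS)


print(assign_condition("p_001"))  # e.g. ('human', 'ambiguous')
```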
Subjective judgments
The results suggest that people were more likely to question moderation decisions made by the AI, with particularly strong concerns about the system’s accountability and trustworthiness. These concerns only emerged when the content was somewhat ambiguous, however; when the content was clearly harassing, it made little difference whether the moderator was a human or an AI.
“It’s interesting to see,” the researchers explain, “that any kind of contextual ambiguity resurfaces inherent biases regarding potential machine errors.”
They argue that our trust in a moderating system and our perception of its accountability are inherently subjective, and that when there is an element of doubt, AI tends to be scrutinized more harshly than human moderators.
What isn’t clear from the research is how users react when humans and AI moderate content together, an arrangement that seems likely given the growing emphasis on moderating online communities and the sheer volume of content those moderation systems have to process.
“Even if AI could effectively moderate content,” the authors conclude, “there is a [need for] human moderators, as rules in [a] community are constantly changing, and cultural contexts differ.”