Regular internet users will almost certainly have encountered a classic internet "pile-on", in which the community heaps opprobrium on a deserving (or undeserving) victim. Research from the Kellogg School of Management explores why people join in.
The researchers looked into why people often dish out punishment without considering other viewpoints, something they call “punishment without looking.”
Joining a pile-on
They wanted to know how common this behavior is, and whether it happens because seeking out other perspectives takes too much effort or because skipping that step is seen as a sign of moral conviction.
The researchers discovered that, overall, there is no social reward for declining to seek out different views. That might be good news for anyone hoping for a world with less snap judgment. But there is a catch: there is a social reward for dishing out punishment, and some of the people doing the punishing never bother to consider alternative perspectives.
To dig into the idea of punishment without considering different views, the researchers set up an online game with two roles. In one group, participants were "actors." They read a public petition that aligned with their political views (for example, Democrats read a petition calling for a police chief to be fired over offensive comments about the Black Lives Matter movement). Then they decided whether to sign it. Before making the call, actors had the chance to explore opposing views by reading articles or searching for evidence supporting the other side.
The second group, sharing the actors' political stance, took on the role of "evaluators." They observed some of the actors' actions and then decided whether to reward them with a small amount of money. Evaluators also rated actors on fairness, competence, loyalty to the cause, and overall positive impression, using a scale from 0 to 100.
Doling out punishments
The researchers found that evaluators rewarded actors who decided to hand out punishment: actors who chose to sign the petition were financially rewarded for that decision.
Importantly, however, they found no evidence that evaluators favored actors who skipped looking at opposing views. Although evaluators saw non-looking actors as more loyal to the cause, that gain was outweighed by a larger drop in perceptions of the actors' fairness and competence.
In the end, actors who checked out opposing evidence before signing the petition received more generous rewards from evaluators than those who didn’t bother looking.
The crowd we’re in
In the last two studies, the researchers turned to the actors themselves. They wanted to know whether the actors' choice to punish or to gather more evidence depended on whether someone was watching them, and whether their decisions were affected by the type of person evaluating them.
In these studies, the researchers varied how much of the actors' behavior was visible to the evaluators: for some actors, both punishing and investigating were visible; for others, only punishing was visible; and for some, nothing was visible at all. The actors were also told whether the evaluators held strong or moderate beliefs.
Because people care about their image, you’d expect actors to punish more if they think they’re being judged by someone with strong beliefs.
And that's what the researchers found. When Democratic actors knew a highly ideological evaluator was watching, they were more likely to sign the petition (30 percent) than when no one was watching (19 percent). Similar, but weaker, effects appeared with less ideological evaluators. Republican actors showed the same pattern, punishing more often when they knew they were being observed by fellow Republicans.
Making things better
Now that we have a better grasp of why we quickly jump to conclusions online, is there a way to break this cycle of online outrage?
One suggestion is to lean on people's concern for their reputation. Picture a scenario in which, when you sign a petition, you also have to explain why you're supporting it.
This may sound a bit out there, but there have already been attempts to nudge people toward more thoughtful behavior on online platforms. For instance, on Twitter (now X), users would get a quick prompt before sharing an article they hadn't opened through the platform: "Hey, did you read the article? Do you still want to post it?"
Implementing a system that makes a user’s thinking process or fact-checking more visible to others won’t be simple. But if we manage to do it, there’s a chance it could make a positive difference.