I wrote a few years ago about a fascinating project that aimed to automate the estimation of crowd size at any public event. It's an impressive piece of work, but arguably more interesting is the growing body of research aiming to better understand how those crowds behave.
A recent study from the University of Southern California attempts to use social networks to gain that insight. The authors suggest that the way we communicate on Twitter could provide an early indication of whether a protest will turn violent.
“Extreme movements can emerge through social networks,” the authors say. “We have seen several examples in recent years, such as the protests in Baltimore and Charlottesville, where people’s perceptions are influenced by the activity in their social networks. People identify others who share their beliefs and interpret this as consensus. In these studies, we show that this can have potentially dangerous consequences.”
Understanding crowds
The researchers built a deep neural network to examine tweets for signs of ‘moralized language’ during the 2015 Baltimore protests that followed the death of Freddie Gray from injuries sustained in police custody.
They compared the number and frequency of moralized tweets with arrest rates at the protests, which they took as a proxy for violence at the events. Not only did they find a clear correlation, but the number of tweets containing moral rhetoric doubled on days when the protests turned violent.
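To give a rough sense of what that comparison involves, below is a minimal Python sketch that correlates daily counts of moral-rhetoric tweets with daily arrest counts. The figures are entirely hypothetical and stand in for the study’s data purely for illustration.

```python
# Illustrative only: hypothetical daily counts, not the study's data.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical number of tweets flagged as containing moral rhetoric, per day
moral_tweet_counts = np.array([120, 135, 150, 310, 290, 140, 125])

# Hypothetical daily arrest counts, used here as a rough proxy for violence
arrest_counts = np.array([2, 3, 4, 21, 18, 3, 2])

# Pearson correlation between the two daily series
r, p_value = pearsonr(moral_tweet_counts, arrest_counts)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```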
Social media has been used to coordinate and amplify a number of social movements in recent years, and the researchers wanted to explore whether the language used online can influence the way a crowd behaves. They turned to Moral Foundations Theory to analyze the language used in tweets about the protests. The theory frames moral concerns along five foundations: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and purity/degradation.
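To give a flavour of what such an analysis looks like, here is a heavily simplified Python sketch that scores tweets against a toy keyword lexicon for each foundation. It is a dictionary-based stand-in for the deep neural network the researchers actually built, and the word lists are purely illustrative rather than the real Moral Foundations Dictionary.

```python
# Simplified, dictionary-based stand-in for the study's classifier.
# The word lists below are illustrative, not the actual Moral Foundations Dictionary.
from collections import Counter

MORAL_LEXICON = {
    "care/harm": {"harm", "hurt", "protect", "suffer", "safe"},
    "fairness/cheating": {"justice", "unfair", "rights", "equal", "cheat"},
    "loyalty/betrayal": {"betray", "solidarity", "community", "traitor"},
    "authority/subversion": {"police", "law", "order", "defy", "obey"},
    "purity/degradation": {"degrade", "disgust", "sacred", "corrupt"},
}

def moral_foundation_counts(tweet: str) -> Counter:
    """Count how many words in a tweet match each foundation's word list."""
    words = {w.strip(".,!?#@").lower() for w in tweet.split()}
    return Counter({
        foundation: len(words & terms)
        for foundation, terms in MORAL_LEXICON.items()
        if words & terms
    })

# Example usage with made-up tweets
tweets = [
    "They betrayed our community and no one is safe",
    "Demand justice and equal rights for all",
]
for tweet in tweets:
    print(tweet, "->", dict(moral_foundation_counts(tweet)))
```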
Echo chambers
The research found that echo chambers can exacerbate matters, as people connect online with those who share their beliefs whilst distancing themselves from those who may differ.
“Social media data help us illuminate real-world social dynamics and test hypotheses in situ. However, as with all observational data, it can be difficult to establish the statistical and experimental control that is necessary for drawing reliable conclusions,” the authors say.
The team overcame this by conducting a number of behavioral studies, each involving a few hundred volunteers who were asked to agree or disagree with various statements about the use of violence against far-right protesters.
This experiment revealed that the more confident people were that others in their network shared their view, the more likely they were to think the use of violence against their perceived opponents was justified.
Suffice it to say, the challenge with these findings is performing the analysis quickly enough for any insight into the likelihood of a crowd turning violent to be of use to law enforcement. We seem to be some way off that kind of scenario at the moment, although the findings are interesting nonetheless.