Using AI To Spot Bias In The News

The last few years have seen growing concerns about the reliability of the information we consume. While much of this has centered on the huge rise in misinformation, there are also concerns about bias in the news. Research from McGill University explores how AI can help detect bias in the media.

The researchers developed a program that used headlines from the Canadian Broadcasting Corporation (CBC) as prompts to generate news coverage of COVID-19. They then compared the coverage the program created with the actual coverage. The results show that CBC coverage tended to focus more on geopolitics and personalities than on the medical emergency itself.
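A minimal sketch of how such a comparison could work, assuming (the article does not specify) a GPT-style text generator and a simple keyword-based framing comparison; the model name, headlines, framing categories, and article text below are illustrative placeholders, not the researchers' actual setup.

```python
# Sketch: generate coverage from headlines, then compare its framing
# with that of the real articles. All data here is hypothetical.
from collections import Counter
from transformers import pipeline

# Hypothetical CBC-style headlines used as prompts.
headlines = [
    "COVID-19 cases rise as provinces weigh new restrictions",
    "Prime minister announces new border measures amid pandemic",
]

# Generate "what could have been reported" from each headline.
generator = pipeline("text-generation", model="gpt2")
generated = [
    generator(h, max_new_tokens=100, num_return_sequences=1)[0]["generated_text"]
    for h in headlines
]

# Illustrative framing categories (assumed for demonstration only).
FRAMES = {
    "biomedical": ["virus", "symptoms", "hospital", "vaccine", "infection"],
    "political":  ["minister", "government", "border", "election", "policy"],
}

def frame_counts(texts):
    """Count how often words from each frame appear across a set of texts."""
    joined = " ".join(texts).lower()
    return Counter({frame: sum(joined.count(w) for w in words)
                    for frame, words in FRAMES.items()})

# real_articles would hold the actual CBC stories matched to each headline.
real_articles = ["...full CBC article text for each headline..."]
print("AI-generated framing:", frame_counts(generated))
print("Actual CBC framing:  ", frame_counts(real_articles))
```

Comparing the word frequencies of each framing category in the generated versus published text gives a rough picture of which angles the newsroom emphasized relative to what the model, prompted only by the headline, would have produced.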

“Reporting on real-world events requires complex choices, including decisions about which events and players take center stage,” the researchers say. “By comparing what was reported with what could have been reported, our study provides perspective on the editorial choices made by news agencies.”

Impartial media

The researchers believe that this is vital to understand given the key role the media’s framing of events plays in shaping public opinion and even government policy.

“The AI saw COVID-19 primarily as a health emergency and interpreted the events in more bio-medical terms, whereas the CBC coverage tended to focus on person- rather than disease-centered reporting,” they explain. “The CBC coverage was also more positive than expected given that it was a major health crisis—producing a sort of rally around the flag effect. This positivity works to downplay public fear.”

The researchers explain that while many studies have attempted to understand the inherent biases within AI itself, they wanted to flip things around and explore how AI can help us understand our own biases.

“We’re not suggesting that the AI itself is unbiased. But rather than eliminating bias, as many researchers try to do, we want to understand how and why the bias comes to be,” they conclude.
