Readers Wary of AI-Generated Headlines

News readers are wary of AI-generated headlines, often seeing them as less accurate. As AI content spreads across the internet, social media companies have started labeling such material. A study from the University of Zurich examined how these labels influence readers' perceptions of headlines.

In two online experiments with nearly 5,000 participants from the US and UK, researchers asked each participant to rate 16 headlines: a mix of true and false headlines, some AI-generated and some human-written.

The first experiment tested four labeling conditions: (i) no headlines were labeled as AI-generated, (ii) AI-generated headlines were correctly labeled as AI, (iii) human-written headlines were falsely labeled as AI, and (iv) false headlines were correctly labeled as false.

Less reliable

The results were clear. Headlines labeled as AI-generated were rated as less accurate, and participants were less willing to share them, regardless of whether the headlines were true and whether they were actually created by AI or by humans. Labeling a headline as false, however, had a much stronger effect than labeling it as AI-generated.

To better understand why people are skeptical of AI content, the researchers tested several possible explanations. They found that the aversion stemmed from the belief that AI-labeled headlines had been written entirely by machines, without human oversight.

The researchers suggest that while transparency about AI-generated content is important, clearer definitions of what such labels mean are needed. Mislabeling headlines as AI-generated could backfire, making them seem less reliable even when they are accurate. To make labeling more effective, false headlines should be clearly marked as false rather than merely as AI-generated.
