As artificial intelligence (AI) plays a growing role in journalism, newsrooms are facing a dual challenge: how to use the technology effectively and how to explain its use to readers. A study from the University of Kansas reveals a key obstacle: readers tend to trust news less when they believe AI is involved, even if they don’t fully understand its role.
The study highlights a paradox. Readers are aware of AI’s presence in news production and often view it negatively, but they struggle to grasp what AI actually does. This gap in understanding complicates efforts to disclose AI’s contributions in a way that builds trust.
Experimenting with Perceptions
To explore how AI affects readers’ trust, researchers conducted an experiment using a news article about aspartame’s safety. The article was presented with five different bylines:
- Written by staff writer
- Written by staff writer with AI tool
- Written by staff writer with AI assistance
- Written by staff writer with AI collaboration
- Written by AI
The article’s content remained unchanged across all versions. Readers were then asked what they thought the byline meant and how credible they found the story.
The findings show that readers often misinterpret bylines. Many assumed AI was involved even when it wasn’t explicitly mentioned, interpreting “staff writer” as a possible indicator of AI use. Those who believed AI had contributed viewed the article as less credible, regardless of the byline.
This skepticism stems from assumptions about AI’s role. Readers imagined AI performing tasks like research or drafting, but the lack of clear disclosure left them guessing. This sensemaking—filling in gaps based on prior knowledge—often led to negative conclusions.
The Transparency Dilemma
A second paper from the research team examined how perceptions of “humanness” influenced trust. It found that readers valued transparency but trusted articles more when they believed humans had done the bulk of the work. The more AI was perceived to be involved, the less credible the article seemed.
The study underscores a fundamental point: fields traditionally dominated by human expertise, like journalism, face unique challenges when introducing AI. While AI-driven recommendations in tech fields, such as YouTube’s algorithms, raise few concerns, readers hold journalism to a different standard.
The researchers call for clearer communication about AI’s role in journalism. Simply stating that AI was used is not enough. Specific explanations of how AI contributed—whether as a research tool or an editorial assistant—are essential to maintaining trust.
Recent controversies, such as allegations of AI-generated articles being published under human bylines, highlight the risks of unclear disclosure. Transparency not only builds trust; it also protects journalism's credibility from lasting damage.
Next Steps for Newsrooms
The study suggests that newsrooms would benefit from better educating readers about how news is created. Readers often misunderstand industry terms like “byline” and practices like corrections and ethics training. Bridging this gap could help align readers’ perceptions with journalistic standards.
“People trust humans in professions traditionally done by humans,” the researchers conclude. “To preserve that trust, we need to be clear about what AI does and doesn’t do in journalism. Transparency is not just a courtesy—it’s essential for credibility.”