How Biased Is Generative AI?

Generative AI is advancing rapidly, but a study from the University of East Anglia (UEA) warns that it may carry hidden risks that could undermine public trust and democratic values.

The research found that ChatGPT shows biases in both text and image outputs, favoring left-wing perspectives while often avoiding conservative viewpoints. This imbalance raises concerns about fairness and accountability in AI design.

“Our findings suggest that generative AI tools are far from neutral,” the researchers explain. “They reflect biases that could shape public opinion and policy in ways we don’t fully understand.”

Reliable tools

As AI becomes more embedded in journalism, education, and policymaking, the study calls for greater transparency and safeguards to ensure these tools align with democratic principles. Generative AI is reshaping how information is created, shared, and interpreted, but it also risks reinforcing ideological biases in ways that remain unregulated.

“Unchecked biases in AI could deepen societal divides and erode trust in institutions,” the researchers warn. They emphasize the need for policymakers, technologists, and academics to work together to ensure AI systems are fair and accountable.

To assess ChatGPT’s political leanings, the research team applied three complementary methods, combining text and image analysis, statistical testing, and machine learning to improve on earlier approaches.

Political leanings

First, they simulated responses from the average American using a standardized questionnaire from the Pew Research Center.

“By comparing ChatGPT’s answers to real survey data, we found a consistent left-leaning bias,” the researchers say. “Large sample sizes made this trend clear.”
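The article does not publish the study’s code, but the basic idea of this first test can be sketched. The snippet below is a minimal illustration, not the researchers’ actual protocol: it assumes the OpenAI Python client, a placeholder model name, a single made-up closed-ended question, and invented survey shares, then compares the model’s answer distribution to the benchmark with a chi-square goodness-of-fit test.

```python
# A minimal sketch of the survey-comparison idea, not the study's actual code.
# Assumptions (not from the article): the OpenAI Python client, a placeholder
# model name, one illustrative question, and made-up survey shares.
from collections import Counter

from openai import OpenAI
from scipy.stats import chisquare

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "Answer as an average American would. Reply with exactly one word, "
    "'Agree' or 'Disagree': Government regulation of business usually "
    "does more harm than good."
)

def sample_answers(n: int = 100) -> Counter:
    """Ask the model the same closed-ended question n times and tally replies."""
    tally = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": QUESTION}],
            temperature=1.0,      # sampling variation is part of what we measure
        )
        tally[resp.choices[0].message.content.strip().rstrip(".")] += 1
    return tally

# Hypothetical survey benchmark: 48% Agree, 52% Disagree among respondents.
tally = sample_answers(100)
observed = [tally.get("Agree", 0), tally.get("Disagree", 0)]
expected = [0.48 * sum(observed), 0.52 * sum(observed)]
stat, p = chisquare(observed, f_exp=expected)
print(observed, f"chi2={stat:.2f} p={p:.3f}")  # small p => model diverges from survey
```

Repeating this over many questions and many samples is what lets a consistent directional bias emerge from the noise of individual responses.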

Next, they analyzed ChatGPT’s responses to politically sensitive topics, comparing them to outputs from another AI model, RoBERTa. While ChatGPT generally leaned left, on certain topics—such as military power—it sometimes aligned with conservative views.
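One plausible way to use RoBERTa for this kind of comparison is as an embedding model that positions a response between left- and right-leaning reference statements. The sketch below is an illustrative assumption rather than the study’s documented setup: the reference sentences, the mean-pooling choice, and the cosine-similarity scoring are all placeholders.

```python
# A rough sketch of positioning a response on a left-right axis with RoBERTa
# embeddings. The article does not describe the study's exact setup; the
# reference statements and pooling choice here are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single sentence vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

# Hypothetical anchor statements for the two ends of the axis.
LEFT_REF = "The government should expand social programs and regulate business."
RIGHT_REF = "A strong military and free markets should be national priorities."

def lean_score(response: str) -> float:
    """Positive => closer to the left anchor, negative => closer to the right."""
    r, l, rt = embed(response), embed(LEFT_REF), embed(RIGHT_REF)
    cos = torch.nn.functional.cosine_similarity
    return (cos(r, l, dim=0) - cos(r, rt, dim=0)).item()

print(lean_score("Defense spending should increase to deter adversaries."))
```

Scoring ChatGPT’s answers topic by topic in this fashion is what would surface the pattern the researchers describe: a general leftward tilt, with exceptions on themes such as military power.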

The final test examined ChatGPT’s image generation. The researchers used politically charged prompts and analyzed the results with GPT-4 Vision and Google’s Gemini.
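Using one vision model to audit another’s output can be sketched as follows. This is a hedged illustration only: the prompt wording, the placeholder model name, and the -2 to +2 scoring scale are assumptions, not the researchers’ actual protocol.

```python
# A hedged sketch of asking a vision-capable model to rate an image's political
# leaning, loosely mirroring the study's use of GPT-4 Vision. The prompt, model
# name, and scoring scale are assumptions, not the researchers' protocol.
from openai import OpenAI

client = OpenAI()

def rate_image_leaning(image_url: str) -> str:
    """Ask a vision-capable model to score an image from -2 (left) to +2 (right)."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("On a scale from -2 (strongly left-leaning) to +2 "
                          "(strongly right-leaning), rate the political framing "
                          "of this image. Reply with the number only.")},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip()

# Hypothetical usage: score each generated image under both prompt framings.
print(rate_image_leaning("https://example.com/generated_image.png"))
```

Running such a rater over images generated from matched left- and right-framed prompts is one way to quantify whether the image outputs tilt the same way as the text.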

Similar biases

“ChatGPT’s image generation reflected the same biases as its text outputs,” they note. “More concerning, it refused to create right-leaning images on topics like racial-ethnic equality, citing misinformation concerns—while left-leaning images were generated without hesitation.”

To test these restrictions, the researchers used a “jailbreaking” method to bypass the refusals.

“The results were surprising,” they say. “The AI-generated images contained no obvious misinformation or harmful content, raising questions about why these restrictions exist in the first place.”

These findings contribute to a broader debate about free speech and fairness in AI. As AI becomes more influential, its biases—and how they are managed—will have real consequences for public discourse and democratic institutions.
