Ever tried to convince a conspiracy theorist that the moon landing wasn’t faked? You probably didn’t get far. But new research from MIT’s Sloan School of Management suggests that ChatGPT might do a better job. The study shows that large language models can reduce people’s belief in conspiracy theories, with effects lasting at least two months. This discovery sheds light on why conspiracy beliefs are so stubborn and offers a potential tool to combat their spread.
Conspiracy theories—claims that major events are secretly controlled by powerful people—have always fascinated and worried society. They seem to stick around despite evidence to the contrary. The common view is that these beliefs satisfy deep psychological needs, making them immune to facts. Once someone buys into a conspiracy, it’s assumed they can’t be reasoned out of it.
Lack of evidence
The researchers weren’t convinced. They wondered if people just hadn’t seen strong enough evidence to challenge their beliefs. Conspiracy theories vary a lot, with each person holding different versions based on specific arguments. So, if you haven’t heard those particular arguments, you might struggle to counter them.
To test their idea, the researchers used GPT-4 Turbo, an advanced AI model, to talk with more than 2,000 people who believed in various conspiracy theories. Each participant described the conspiracy they believed in, along with their reasons. The AI then used that information to have a tailored conversation, challenging the person’s belief with personalized evidence and reasoning.
These exchanges, which lasted about eight minutes on average, allowed the AI to directly address the specific reasons people believed in their conspiracies. Before AI, this kind of individual back-and-forth wasn’t possible at scale. The results were surprising: belief in the conspiracy dropped by about 20%, and one in four participants disavowed the theory entirely. Even two months later, the effects remained.
Changing minds
What’s more, the AI was able to reduce belief across a wide range of conspiracy theories, from COVID-19 misinformation to claims of election fraud in the 2020 U.S. presidential race. While the AI had less success with people whose conspiracy beliefs were central to their worldview, it still made an impact, with little difference in results across different demographic groups.
The study also found that the conversations didn’t just change beliefs—they influenced behavior. After talking with the AI, participants were more likely to unfollow conspiracy theorists online and more willing to challenge conspiratorial views in discussions.
The researchers point out that AI could be used to spread false beliefs as easily as it can debunk them, so it’s important to handle the technology responsibly. Still, they see great potential in using AI to reduce the spread of conspiracy theories. For instance, AI could be embedded in search engines to provide accurate information when people look up conspiracy-related topics.
The researchers conclude that relevant evidence matters more than we think—if it’s targeted to the beliefs people hold. This finding has broader implications beyond conspiracy theories. Any belief resting on weak evidence might be revised through this kind of tailored dialogue.
Changing research
This study also shows how AI can change social science research. In the past, psychology experiments were limited to small groups, often students, which constrained both scale and depth. AI allows researchers to reach more people while keeping the conversation personalized and detailed.
The findings challenge the idea that conspiracy believers are beyond reasoning. Instead, many are open to changing their views when presented with the right kind of evidence. As the researchers put it, “Much of the time, people just didn’t have the right information.”
Additionally, members of the public interested in this ongoing work can visit a website and try out the intervention for themselves.