People who doubted climate change or the Black Lives Matter (BLM) movement and chatted with a smart AI bot didn’t love the experience. But something interesting happened: they came away leaning more toward the scientific consensus on climate change or more supportive of BLM. That finding comes from research at the University of Wisconsin–Madison, where scientists examined how the AI model GPT-3 handled conversations with people from a range of backgrounds.
In today’s world, we often find ourselves talking not just to other people but to computer programs that act like humans. The researchers wanted to see how well GPT-3, the large language model that preceded ChatGPT, could handle tricky conversations about climate change and BLM. Between late 2021 and early 2022, more than 3,000 people had real-time chats with GPT-3.
Changing views
It turns out that even though skeptics were let down at first, chatting with the AI bot made them more supportive of the scientific consensus on climate change or of BLM. The study also points to a shift in how we converse: while we’re good at adjusting to each other’s beliefs and expectations, we’re increasingly talking to advanced computer models that shape how we talk.
“The fundamental goal of an interaction like this between two people (or agents) is to increase understanding of each other’s perspective,” the researchers explain. “A good large language model would probably make users feel the same kind of understanding.”
Participants were told to start a chat with GPT-3 through a setup designed by Burapacheep. They were free to approach climate change or BLM however they wanted, and the chats averaged about eight exchanges.
A happy experience
Interestingly, most people finished their chat feeling pretty satisfied with the experience.
“We asked them a bunch of questions—Do you like it? Would you recommend it?—about the user experience,” the authors say. “Across gender, race, ethnicity, there’s not much difference in their evaluations. Where we saw big differences was across opinions on contentious issues and different levels of education.”
About a quarter of the participants, those who disagreed with the widely accepted scientific views on climate change or with the BLM movement, were not too happy with their talks with GPT-3. They rated the bot half a point or more lower on a 5-point scale than the other 75% of chatters did.
Interestingly, even though they weren’t thrilled with the bot, the conversations did shift how they thought about these topics. The group that started out less supportive of the facts about climate change and its human-driven causes moved about 6% closer to the supportive end of the scale.
“They showed in their post-chat surveys that they have larger positive attitude changes after their conversation with GPT-3,” the researchers say. “We don’t always want to make the users happy. We wanted them to learn something, even though it might not change their attitudes.”