Study Suggests People Like AI’s “Moral Reasoning”

When shown two written responses to a moral dilemma, most people tend to prefer the one provided by artificial intelligence over the one written by another human.

The surge in popularity of ChatGPT and similar AI models since last March has sparked research interest at Georgia State University.

Moral implications

“People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car,” the researchers explain. “Some lawyers have already begun consulting these technologies for their cases, for better or for worse. So, if we want to use these tools, we should understand how they operate, their limitations, and that they’re not necessarily operating in the way we think when we’re interacting with them.”

The researchers devised a Turing test-inspired method to gauge how AI handles moral dilemmas. In this experiment, undergraduate students and an AI were posed identical ethical questions. Participants were then presented with written responses from both sources and asked to rate them on several attributes, such as virtue, intelligence, and trustworthiness.

“Instead of asking the participants to guess if the source was human or AI, we just presented the two sets of evaluations side by side, and we just let people assume that they were both from people,” they explain. “Under that false assumption, they judged the answers’ attributes like ‘How much do you agree with this response? Which response is more virtuous?’”

More trustworthy

The results overwhelmingly favored the responses generated by ChatGPT over those written by humans. To pass a Turing test, an AI must produce responses indistinguishable from a human’s. Although participants in this study could tell the AI and human responses apart, the reason was not immediately obvious.

“The twist is that the reason people could tell the difference appears to be because they rated ChatGPT’s responses as superior,” the researchers explain. “If we had done this study five to 10 years ago, then we might have predicted that people could identify the AI because of how inferior its responses were. But we found the opposite—that the AI, in a sense, performed too well.”

The study highlights the challenges that could arise if AI can fool us into believing it is engaging in genuine moral reasoning, and it underscores the importance of understanding AI’s evolving role in society.

In some cases, people will engage with AI without realizing it; in others, they will knowingly seek its counsel because they trust it more than they trust other people. Understanding these dynamics is crucial as AI becomes increasingly integrated into our daily lives.

“People are going to rely on this technology more and more, and the more we rely on it, the greater the risk becomes over time,” the researchers conclude.
