Chatbots are increasingly common in customer service environments. Indeed, some estimates suggest that chatbots will handle 95% of online customer service interactions by 2025. Research from the Queensland University of Technology suggests that such a strategy is not without risks.
The researchers found that while chatbots can be an effective medium for customer service, they can also infuriate customers, generating significant anger and making them less likely to complete their purchase.
Talking gibberish
Research from Columbia University found that part of the problem is that AI-based chatbots have an unhelpful tendency to talk gibberish. Chatbots would often judge a sentence to be meaningful and helpful that human users found to be complete nonsense.
“That some of the large language models perform as well as they do suggests that they capture something important that the simpler models are missing,” the researchers explain. “That even the best models we studied still can be fooled by nonsense sentences shows that their computations are missing something about the way humans process language.”
This muddled logic was further underlined by a recent paper from Cornell’s SC Johnson College of Business, which explored how humans and chatbots make decisions. The findings suggest we cannot automatically rely on chatbots to make sound decisions.
Irrational decisions
“Surprisingly, our study revealed that AI chatbots, despite their computational prowess, exhibit decision-making patterns that are neither purely human nor entirely rational,” the researchers explain. “They possess what we term as an ‘inside view’ akin to humans, characterized by falling prey to cognitive biases such as the conjunction fallacy, overconfidence, and confirmation biases.”
The conjunction fallacy is a common reasoning error whereby we judge a specific combination of conditions to be more probable than a single, more general condition. Confirmation bias is our tendency to favor information that supports our existing views over information that contradicts them.
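The classic illustration of the conjunction fallacy is Tversky and Kahneman’s “Linda problem” (a textbook example, not one drawn from the Cornell study): told that Linda is outspoken and deeply concerned with social justice, people routinely rate “Linda is a bank teller and is active in the feminist movement” as more likely than “Linda is a bank teller.” Probability theory forbids this, because a conjunction can never be more probable than either of its parts:

P(A and B) ≤ min(P(A), P(B))

With A as “bank teller” and B as “active feminist,” judging P(A and B) > P(A) is the fallacy.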
At the same time, AI chatbots provide an “outside view,” which can enhance human decision-making by offering a fresh perspective. They are good at using base rates and are less prone to biases rooted in limited memory or in judging the likelihood of events by how easily recent examples come to mind (the availability heuristic). Unlike humans, who often overvalue things they own (a bias known as the endowment effect), AI chatbots don’t show this tendency.
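To see what using base rates well looks like, consider a standard textbook example (an illustration, not a task from the study): a condition affects 1% of people, and a test catches 90% of true cases while falsely flagging 10% of healthy ones. Given a positive result, the probability of actually having the condition is

P(condition | positive) = (0.90 × 0.01) / (0.90 × 0.01 + 0.10 × 0.99) ≈ 0.08

People who neglect the 1% base rate typically guess something close to 90%; a reasoner that weighs the base rate properly lands near 8%.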
In the study, the researchers looked at various AI platforms, including ChatGPT, Google Bard, Bing Chat AI, ChatGLM Pro, and Ernie Bot. They evaluated each system’s decisions against 17 principles from behavioral economics, shedding light on how humans and AI interact in decision-making processes.
Inexact mirroring
The study found that chatbots don’t really mirror human decision-making all that closely, and certainly not as closely as the researchers expected them to.
Indeed, despite being trained on huge datasets of human-generated text that reflect human decision-making, the chatbots made decisions that were neither consistently human-like nor entirely rational. For instance, the study found that whereas humans might take a gamble when facing a loss, chatbots would often do the opposite and look for a more certain outcome. In other words, they don’t display the risk-seeking response to potential losses that loss aversion typically produces in humans.
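To make that concrete, consider a prospect-theory-style choice (an illustration, not an item from the paper): a sure loss of $50, or a coin flip between losing $100 and losing nothing. The two options carry the same expected loss,

0.5 × (−$100) + 0.5 × $0 = −$50

yet humans typically take the coin flip in the hope of escaping the loss altogether. The chatbots in the study leaned the other way, preferring the certain $50 loss.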
If we’re to make appropriate use of chatbots in our professional lives, it’s vital that we understand how they work and how their decision-making differs from our own.
While AI can be a useful tool, it’s important to approach it with a healthy dose of skepticism. Knowing when AI offers an “inside view” can help reduce the risks of overconfidence and confirmation bias. On the other hand, using the “outside view” that AI provides can improve decision-making by focusing on base rates and avoiding biases that humans often fall prey to.
As AI becomes more integrated into different areas of life, understanding how it makes decisions is increasingly important. This research highlights the strengths and weaknesses of AI, as well as its potential to enhance human decision-making.
“Exploring the unknown territory of AI decision-making has brought together diverse perspectives, paving the way for a deeper understanding of this rapidly evolving technology and its implications for society,” the authors conclude. “As we continue on this journey, we aim to foster responsible and informed usage of AI, ensuring that it serves as a tool for progress and empowerment in the hands of decision-makers.”