The modern consumer interface combines human agents with technologies such as chatbots. Research from the University of Kentucky suggests we should be mindful of which channel we use depending on whether we're delivering good news or bad news to the consumer.
The study suggests that when things have gone worse than the customer expected, the interaction goes better if they deal with a chatbot, but when things have gone better than expected, they respond more favorably to a human agent.
“This happens because AI agents, compared to human agents, are perceived to have weaker personal intentions when making decisions,” the researchers explain. “That is since an AI agent is a non-human machine, consumers typically do not believe that an AI agent’s behavior is driven by underlying selfishness or kindness.”
Because the AI lacks underlying intent, consumers assume a chatbot need neither be punished for selfishness in the wake of an unfavorable outcome nor rewarded for benevolence after a favorable one.
Consumer responses
“For a marketer who is about to deliver bad news to a customer, an AI representative will improve that customer’s response,” the researchers continue. “This would be the best approach for negative situations such as unexpectedly high price offers, cancellations, delays, negative evaluations, status changes, product defects, rejections, service failures, and stockouts.”
As such, the authors argue that firms would be well placed to use AI-based agents in more negative situations and human agents where positive outcomes are more likely. Indeed, even when the role isn't handed to a chatbot entirely, using one to disclose particular pieces of negative information could still be useful.
Even if companies rely on chatbots entirely for customer engagement, the researchers believe they can influence consumer feedback depending on whether the chatbot is designed to appear more or less humanlike.
“We hope that making consumers aware of this phenomenon will improve their decision quality when dealing with AI agents, while also providing marketing managers techniques, such as making AI more humanlike in certain contexts, for managing this dilemma,” the authors conclude.