Chatbots may be increasingly functional, but one thing we seldom expect from them is much in the way of emotion. Research from the Georgia Institute of Technology examined whether the emotional expression we welcome in customer service interactions with humans is also welcome in interactions with AI.
“It is commonly believed and repeatedly shown that human employees can express positive emotion to improve customers’ service evaluations,” the researchers explain. “Our findings suggest that the likelihood of AI’s expression of positive emotion to benefit or hurt service evaluations depends on the type of relationship that customers expect from the service agent.”
Expressing emotion
They conducted three studies to explore the role of emotional AI in customer service interactions. In each, the chatbots expressed emotion by using a range of positive emotional adjectives and by adding extra exclamation marks to their messages.
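To give a sense of what such a manipulation might look like in practice, here is a minimal sketch in Python. It is purely illustrative: the WARM_ADJECTIVES list, the add_emotion function, and the sample message are all assumptions for the example, not the researchers' actual implementation; the paper only specifies that the emotional variants used positive adjectives and more exclamation marks.

```python
import random

# Hypothetical helper: one way a neutral chatbot reply could be given the
# kind of positive emotional markers the study describes (positive
# adjectives plus extra exclamation marks). Not the researchers' code.
WARM_ADJECTIVES = ["delighted", "happy", "thrilled", "glad"]

def add_emotion(reply: str) -> str:
    """Prepend a positive emotional opener and swap periods for exclamation marks."""
    adjective = random.choice(WARM_ADJECTIVES)
    emotional = f"I'm {adjective} to help! " + reply
    # Replace sentence-ending periods with exclamation marks for extra warmth.
    return emotional.replace(".", "!")

neutral = "Your refund has been processed. It should arrive within five business days."
print(add_emotion(neutral))
# e.g. "I'm thrilled to help! Your refund has been processed! It should arrive within five business days!"
```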
The results suggest that while positive emotions were generally well received when displayed by humans, they had no effect when displayed by bots. In follow-up studies, it emerged that people were more receptive to an emotional chatbot if they were themselves more socially oriented rather than transaction oriented. For the latter group, an emotional bot made the interaction seem much worse.
“Our work enables businesses to understand the expectations of customers exposed to AI-provided services before they haphazardly equip AIs with emotion-expressing capabilities,” the researchers note.
A final study revealed that people don’t generally expect AI technology to express emotion, and so can react negatively when they engage with a platform that does. As a result, the authors urge a degree of caution when deploying chatbots with emotion built into their character, as companies are unlikely to know the biases or expectations of their customers.
“Our findings suggest that the positive effect of expressing positive emotion on service evaluations may not materialize when the source of the emotion is not a human,” the researchers conclude. “Practitioners should be cautious about the promises of equipping AI agents with emotion-expressing capabilities.”