While it’s increasingly acknowledged that AI-based technologies will largely work alongside and augment humans rather than replace them, the media continues to push the narrative of a battle between man and machine, often presenting it as one that humans cannot possibly win. That narrative is not supported by new research from the University of Notre Dame, which suggests that in customer service the human foibles of agents can sometimes help.
The researchers focus their attention on behavioral inertia, our tendency to keep doing things the way we have always done them.
“In general, this inertia costs time and money compared with the optimization you can get with automation,” the researchers say. “However, there are certain situations where inertia actually improves service center operations. When agents are experts, or when they are handling particularly complex, difficult calls, these inertial behaviors are beneficial in terms of efficiency and effectiveness.”
Optimum service
There is an obvious incentive for organizations to route calls to the right place in the shortest time possible, which is a major reason behind the rise of automation in this setting. The process has not always gone smoothly, however.
“Often the automation is awful, so you can never replace humans entirely,” the researchers explain. “Instead, we end up with combinations of humans and automation. It is critical to understand when one outperforms the other. This is particularly important now, since artificial intelligence technologies are increasingly being used in service centers. Services will inevitably involve humans working in conjunction with technologies, and it is critical to understand when the technology provides benefits and when the human does.”
The researchers suggest that routing in call centers is typically optimized on the basis of two key assumptions: first, that agents will follow the guidance the systems provide; and second, that the routing schemes are optimal in terms of efficiency and effectiveness, and therefore better than human judgment.
“We find humans do not always follow the guidance as expected—as indicated by their behavioral inertia,” they explain. “And we find this inertia can be good when the agents are experts or when they are dealing with really difficult issues.”
“For example, a service center’s routing protocol may indicate that Agent A should route an issue to Agent B based on various factors such as length of queue or general expertise,” the researchers continue. “However, based on cognitive biases and social embeddedness, Agent A may route the issue to Agent C. While such routing discretion can hinder overall service center performance, we discovered that it is beneficial when the issue is particularly difficult and/or the agent has high expertise.”
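To make that pattern concrete, here is a minimal, purely illustrative Python sketch of such a routing rule. The agent and issue attributes, scores, and thresholds are hypothetical and not drawn from the study; the sketch simply mirrors the logic described above, following the system’s recommendation by default but deferring to the agent’s own choice when the call is especially difficult or the agent is an expert.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    expertise: float   # hypothetical score: 0.0 (novice) to 1.0 (expert)
    queue_length: int  # calls currently waiting for this agent

@dataclass
class Issue:
    difficulty: float  # hypothetical score: 0.0 (routine) to 1.0 (very complex)

def choose_route(current_agent: Agent, system_pick: Agent,
                 agent_pick: Agent, issue: Issue,
                 expertise_threshold: float = 0.8,
                 difficulty_threshold: float = 0.7) -> Agent:
    """Return the agent the issue should be routed to.

    Follows the system's recommendation by default, but defers to the
    current agent's own preference (their "inertial" routing habit) when
    the issue is particularly difficult or the agent is an expert.
    Thresholds are illustrative only, not values from the research.
    """
    if (issue.difficulty >= difficulty_threshold
            or current_agent.expertise >= expertise_threshold):
        return agent_pick   # trust the expert's or difficult-call judgment
    return system_pick      # otherwise stick with the optimized routing

# Example: Agent A is an expert handling a hard call, so their preferred
# colleague (Agent C) is chosen over the system's pick (Agent B).
agent_a = Agent("A", expertise=0.9, queue_length=2)
agent_b = Agent("B", expertise=0.6, queue_length=1)
agent_c = Agent("C", expertise=0.7, queue_length=4)
hard_issue = Issue(difficulty=0.8)

print(choose_route(agent_a, system_pick=agent_b,
                   agent_pick=agent_c, issue=hard_issue).name)  # -> "C"
```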
As such, the authors recommend that companies work to understand when humans actually outperform AI, and build processes flexible enough to give agents discretion when it is warranted, while still ensuring that AI systems provide adequate support the rest of the time.