Algorithms are being deployed in a growing number of ways in the workplace, from detecting diseases in healthcare to devising lesson plans in education. With many of these use cases, however, there is a sense that people are using the technologies as much to experiment as to achieve clearly defined outcomes.
Recent research from the University of Michigan suggests that few of these deployments are actually delivering the results people hope for. This is due, in part, to workers' hesitancy to make use of the technology available to them.
Algorithm aversion
The researchers found that despite the seemingly universal adoption of technologies like ChatGPT, many workers still display an aversion to using algorithms at work. They wanted to explore why that might be and how such avoidance might play out for customer-facing teams.
“We often hear people talk about ideal visions for algorithms,” the researchers explain. “One vision is that these tools will help people make really good decisions, quickly. But often this vision is based on the algorithm’s performance in a vacuum; it’s abstracted from the context where people are making those decisions.”
The study was born out of the direct experience of the researchers themselves, who had designed AI-based tools as part of their own research. It can be easy, they say, for designers to get wrapped up in the huge productivity and efficiency gains their product will deliver, which will, in turn, fuel mass adoption in the workplace. Designers seldom take account of the more human elements that affect adoption and implementation, however.
How we engage with technology
The researchers explored this by analyzing how around 400 people interacted with AI-based technology. The analysis found that aversion to algorithms can have a wide range of implications for how we decide which technologies to use and adopt, some of which affect us in unexpected ways.
One of these was that people with an aversion to using AI also seemed to want to make fast decisions, even if those weren't necessarily good decisions.
While we may assume that AI will make us faster and more efficient, this doesn't guarantee better outcomes. Indeed, the study found that workers were often slower when using AI, either because the tools' recommendations weren't always good ones or because employees didn't act on them effectively. This was especially so when employees lacked trust in the technology.
Food for thought
The researchers believe that their findings should give leaders and tech developers alike food for thought when it comes to designing and implementing the latest technologies in their operations.
“You may spend a lot of time developing an algorithm for a lofty goal,” they explain. “However, workers have to trust an algorithm to take its advice quickly. You can’t expect efficiency until they’ve had time to get the information about the algorithm’s good performance.
“Even then, the other conditions, such as workload and time pressure, have to be right.”
Understanding your goals
The study provides a timely reminder that before managers seek to implement technology into the workflow of their team, it's vital that they have a clear idea of what they're trying to achieve. The results remind us that there's no one way employees will respond: while some may quickly adopt what the technology suggests, others will spend longer deliberating over the recommendations (or reject them entirely).
“Organizational leaders need to think about which of those actions is preferable,” the researchers explain. “Is the algorithm primarily there to help workers speed up or to improve worker accuracy?”
As technology becomes a more frequent feature in our working lives, these kinds of studies will become increasingly important in helping us understand and manage the interface between man and machine. This interface is only likely to become more complicated as generative technologies continue to develop.
“We can build trust with generative AI, or not, kind of like we build trust with other people: slowly, over time, by learning what it can and cannot do well,” the authors conclude. “While this, in theory, might be a good thing for reducing aversion, generative AI technologies are constantly changing and often inconsistent, making that trust hard to build—at least right now.”