Rumors abound that much of Elon Musk's attack on the American civil service is a thinly veiled attempt to automate many of the tasks previously done by humans. While this is done under the guise of improving efficiency, it neglects the question of how the humans involved, whether those being served by the technology or those having to work alongside it, feel about the arrangement.
Would we like our judicial rulings delivered by an AI judge, for instance, or to be managed by an algorithm? Research from the Max Planck Institute for Human Development set out to find out.
Widespread concerns
The study reveals widespread concerns about AI replacing humans, with clear cultural differences in how willing people are to have AI play a greater role. The researchers looked at AI in management, journalism, religion, medicine, law, and healthcare.
“Fears of AI are associated with the mismatch between psychological traits people deem necessary for an occupation and the perceived potential of AI to possess these traits,” the researchers explain.
They quizzed over 10,000 people from 20 countries, including the United States and nations across Asia and Europe, with each participant tasked with evaluating the aforementioned professions in terms of eight psychological traits, including warmth, fairness, imagination, and competence. The aim was to gauge whether people thought the traits they regarded as central to each profession could realistically be replicated by AI.
“Our model distinguished people’s psychological requirements from the perceived potential of AI on these requirements, which successfully predicted fears about AI being deployed in an occupation,” the researchers explain.
A quick comparison
The findings show that when tasks are automated, people naturally compare the AI to the humans who previously performed them, not only in terms of competence but also across a wide range of other traits and characteristics. The results show that the level of fear around automation is directly linked to the mismatch people perceive between the capabilities of the technology and those of the humans it is replacing.
Perhaps unsurprisingly, there were marked differences in the level of fear people felt depending on their nationality, with people from the United States and India among those with the highest fear of automation. This fear was especially high in areas such as medicine and the law. The fear was considerably lower in China and Japan, with the researchers suggesting this was primarily due to a mixture of cultural factors and the media narrative around AI and automation. Germany sat somewhere in between, with a degree of cautious optimism towards the technology.
There were also clear differences in terms of occupation. For instance, nearly every country expressed fear about the automation of the legal process, with strong support for judges remaining human. Sadly, for me at least, people were seemingly okay with the automation of journalism, with the researchers suggesting this might be because people feel a sense of autonomy over how they interact with the information journalists provide.
The need for empathy
Among the biggest areas of concern were professions typified by empathy. For instance, in healthcare, people across the board were concerned about the introduction of AI, not because it is seen as lacking ability but because it is perceived as lacking empathy and emotional understanding.
Similarly, the research highlighted longstanding concerns about AI acting in a managerial capacity. The researchers point to previous work showing how negatively many of us react to being managed by an algorithm, a scenario perceived as far more harmful than having an AI colleague or using AI to assist us in some way.
“Adverse effects can follow whenever AI is deployed in a new occupation,” the researchers say. “An important task is to find a way to minimize adverse effects, maximize positive effects, and reach a state where the balance of effects is ethically acceptable.”
Deploying the tech
The researchers believe their work underlines the important link between our expectations of certain roles and our perceptions of AI's capabilities. It builds on a study I looked at in a recent article, which showed that those most likely to adopt AI often have the least knowledge of the technology, holding an almost blind belief in its seemingly magical powers that is fueled by ignorance of what it can and cannot do.
That study focused more on the capabilities of the technology than on the perceived empathy and emotional intelligence of the humans it might replace, but together they underline the importance of understanding the value we place on human-centric roles. By acknowledging this, we can build trust in AI more successfully.
“A one-size-fits-all approach overlooks critical cultural and psychological factors, potentially adding barriers to the adoption of beneficial AI technologies across different societies and cultures,” the authors say.
Alleviating fears
So how can these fears be alleviated? The researchers believe it has to be done on a case-by-case basis, taking into account national and cultural differences as well as our expectations of each profession.
For instance, many of the fears about AI in medicine center on things like sincerity and empathy, and the researchers believe these could be alleviated by making the technology more transparent. It could also be positioned as a decision-support tool rather than something that replaces humans altogether.
Likewise, for judges, a key concern is fairness, so algorithms should emphasize how they might make rulings fairer and less biased than is currently the case, and be transparent about how they would do so.
With AI adoption still somewhat piecemeal, hopefully these lessons will be taken on board over the next few years as the technology is rolled out in more domains.