Does The Appearance Of A Robot Affect Our Moral Expectations Of It?

As machines play an ever larger role in our daily lives, there has been growing interest in how we judge their ethics. This question is the bedrock of the Moralities of Intelligent Machines project conducted by the University of Helsinki. The latest study in the project explores how the appearance of robots affects our moral expectations of them.

Participants were asked to read a narrative featuring a robot with varying degrees of humanoid appearance. The robot encountered a moral dilemma akin to the trolley problem, and the volunteers were asked to assess the morality of its decisions.

Lifelike expectations

The results suggest that when robots look more like humans, we tend to have more human-like expectations of them. Specifically, participants viewed the choices made by humanoid robots as less ethically sound than those made by robots with a more traditional, machine-like appearance.

“Humanness in artificial intelligence is perceived as eerie or creepy, and attitudes towards such robots are more negative than towards more machine-like robots,” the researchers say. “This may be due to, for example, the difficulty of reacting to a humanoid being: is it an animal, a human or a tool?”

The authors believe their findings show that we are generally accepting of the concept of robots making moral decisions, and that those decisions can be considered on a par with those made by humans. The appearance of the robot does appear to make a difference, however.

Moral judgments

The number of machines making moral decisions is growing across society, so the researchers believe their findings are important in helping us understand how people will respond to the decisions these machines make.

“It’s important to know how people view intelligent machines and what kinds of factors affect related moral assessment,” they explain. “For instance, are traffic violations perpetrated by a stylish self-driving car perceived differently from those of a less classy model?”

The researchers believe that their findings should influence the development of both AI and robotics in the coming years, while also helping to inform political discussions in areas such as regulation.

“What kind of robots do we want to have among us: robots who save five people from being run over by a trolley, sacrificing one person, or robots who refuse to sacrifice anyone even if it would mean saving several lives?” the authors conclude.  “Should robots be designed to look like humans or not if their appearance affects the perceived morality of their actions?”
