You might think that it would be a very rare thing indeed for people to value a piece of hardware over a human life, yet new research from Radboud University suggests that such circumstances may exist. Bizarrely, one of these circumstances might involve a perception that the robot will feel pain.
“It is known that military personnel may mourn a robot that is used to clear mines in the army. Funerals are organised for them. We wanted to investigate how far this empathy for robots extends, and what moral principles influence behaviour towards robots. Little research has been done in this area as of yet,” the authors explain.
Sacrificing the machine
To what extent would we be willing to sacrifice machines if it meant saving a human life? Volunteers in the research were presented with a number of moral dilemma scenarios in which they would have to sacrifice an individual to save a group of wounded people. In some scenarios that individual was a human, in some it was a humanoid robot, whilst in others it was a more ordinary piece of machinery.
The results showed that when the robot was humanoid in style, it presented a much starker dilemma for the volunteers. When the robot was designed in a humanoid style and presented as having its own thoughts and emotions, the participants were less likely to sacrifice the machine for anonymous humans. It’s a finding that the researchers believe highlights how people can bestow certain moral values on robots in the right circumstances.
“A human-looking robot can cause feelings and behaviours that contrast with the function for which it was developed—to help us. And the question is whether this is desirable for us,” they explain.
Empathizing with the machine
This should perhaps not come as that big a surprise. Research published a few years ago in Nature highlighted the ability of people to form emotional bonds with machines.
The researchers, a Japanese team, believe they have found the first neurophysiological evidence of our ability to empathize with a robot in apparent pain, albeit at a slightly different level to that shown towards other humans.
The study saw EEG tests performed on a relatively small sample of adults who were shown pictures of either a human or a robotic hand in a painful situation.
The results were fascinating. Participants did show empathy towards the robot, but at a lower level than to the humans in the picture.
“The ascending phase of P3 (350-500 ms after the stimulus presentation) showed a positive shift in the observer for a human in pain in comparison with the no-pain condition, but not for a robot in perceived pain. Then, the difference between empathy toward humans and robots disappeared in the descending phase of P3 (500-650 ms),” the authors say. “The positive shift of P3 is considered as reflecting the top-down process of empathy. Its beginning phase seems related to the process of perspective taking, as was shown in a previous study.”
So it’s perhaps not that surprising that people are increasingly willing, and able, to exhibit an emotional connection with technology, which in turn distorts what would appear to be rational moral responses towards them.