Isaac Asimov’s first law famously states that a robot may not harm a human being. While physical harm is quite clear-cut, psychological harm is much harder to pin down. Would lying count, for instance? Whether lying is a good or bad thing has been a topic of philosophical debate for centuries, but it has very real implications when it comes to how we program robots.
Research from the Queensland University of Technology explores how people react when robots deceive them. Among humans, there is a loose consensus that lying is acceptable if it somehow protects the other person from harm. The question is whether the same rules should apply to robots. Is deception acceptable if it serves some greater good (and who gets to decide what that is)?
The case for lying
The researchers found that there are a number of different types of lies, and these shape what we expect from the technologies we work with. They note that robotic companions span a wide spectrum of activities, from vacuum-cleaning robots in the home to huge industrial robots in factories.
By and large, these machines don’t engage in anything resembling thought, but this may be about to change, with machines increasingly capable not only of interacting with us but of taking a more active role in those interactions.
This introduces the prospect of the technology deliberately lying. The researchers suggest a robot could deceive about something other than itself; it could misrepresent its ability to perform a task; or it could hide an ability it actually has, thus feigning ignorance.
The researchers developed a number of scenarios based on these forms of deception and presented them to a pool of around 500 people. The respondents were asked to judge whether the robot’s behavior was deceptive and whether they found it acceptable.
Varying degrees
The responses showed that people typically found all of the lies deceptive, but some were viewed as more acceptable than others. For instance, lies in which the robot deceived about things other than itself were seen as acceptable, while the other types were not. This was especially so when the lie spared someone from harm.
In scenes reminiscent of the hit play Spillikin, one scenario involved a companion robot lying to a woman with Alzheimer’s by saying that her husband was still alive. Respondents largely thought that the robot was trying to spare the woman from harm and was doing good.
Scenarios involving the other forms of lying were viewed far less favorably, however. In one, a housekeeping robot concealed from its owners the fact that it was recording video as it went about its work. Hardly any respondents thought such deception was justified, though many felt the robot’s owner was ultimately responsible for it.
Similarly, in a factory-based scenario, a robot grumbled about the laborious nature of its work despite obviously being unable to feel any kind of physical pain. Respondents also viewed this form of deception negatively, even though it harms no one.
To lie or not to lie
For us humans, of course, there have been copious debates about the nature of lying and whether there are acceptable reasons for it. Technologists have had similar debates, not only about the nature of lying but also about whether deception is worthwhile if it helps technology fit in with us.
The study gives us a glimpse into how humans and technology might interact as robots become more integral to working life. We know from previous studies that it’s crucial for humans to trust their AI companions, and lying can erode that trust, but this study reminds us that not all lies are created equal.
It’s perhaps fair to say that the study poses as many questions as it answers, not least of which is whose job it is to determine whether a lie is justified. It’s probably easiest to assume that lying is wrong, even if some might argue it can be justified.