There has long been a view in robotics that falling into the 'uncanny valley' is an unambiguously bad thing. The argument goes that when robots start to appear a little too human, people are unnerved and respond negatively.
Much of the discomfort, of course, stems from the fact that people know the robots aren't human, yet they look a little too lifelike for comfort. A new study from New York University explores what happens when people engage with machines they actually believe are human, and how those relationships change when the bot's true identity is revealed.
The analysis found that machines can be more efficient than their human counterparts at eliciting cooperation, but this efficiency tends to rely on their ability to conceal their true identity.
Effective cooperation
Participants were asked to complete a cooperation game based on the Prisoner's Dilemma, in which their partner was either a human or a bot. In each round, players had to decide whether to act selfishly or cooperate with their partner; defecting pays off individually, but mutual cooperation produces the better joint outcome.
The researchers told each participant whether their partner was a human or a bot, but sometimes this disclosure was false: some volunteers were told they were engaging with a bot when they were actually playing with a human, and vice versa.
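To make the setup concrete, here is a minimal sketch in Python of how such an experiment might be structured. The payoff values, labels, and condition names are illustrative assumptions, not the study's actual parameters.

```python
# Illustrative sketch of the experimental design described above.
# Payoff values and labels are assumptions, not the study's parameters.
from itertools import product

# Canonical Prisoner's Dilemma payoffs, indexed by (my_move, partner_move):
# temptation (5) > reward (3) > punishment (1) > sucker's payoff (0).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # I'm exploited
    ("defect",    "cooperate"): (5, 0),  # I exploit my partner
    ("defect",    "defect"):    (1, 1),  # mutual defection
}

def play_round(my_move: str, partner_move: str) -> tuple:
    """Return (my_payoff, partner_payoff) for one round."""
    return PAYOFFS[(my_move, partner_move)]

# The 2x2 design: who the partner actually is, crossed with what the
# participant is told. Half of the labels are truthful, half are false.
ACTUAL_IDENTITY = ["human", "bot"]
DECLARED_IDENTITY = ["human", "bot"]

for actual, declared in product(ACTUAL_IDENTITY, DECLARED_IDENTITY):
    label = "truthful" if actual == declared else "false"
    print(f"Partner is a {actual}, presented as a {declared} ({label} label)")
```

Comparing cooperation rates across these four conditions is what lets the researchers separate the effect of what the partner actually is from the effect of what the participant believes it to be.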
The team hoped the experiment would uncover not only any prejudices people hold towards partners they perceive to be machines, but also how that prejudice affects the efficiency of the machine itself.
The data suggests that when machines were presented as humans, they were much better at encouraging their partners to cooperate. When the pretense fell away, however, and their true identity was revealed, cooperation rates dropped considerably.
“Although there is broad consensus that machines should be transparent about how they make decisions, it is less clear whether they should be transparent about who they are,” the researchers say. “Consider, for example, Google Duplex, an automated voice assistant capable of generating human-like speech to make phone calls and book appointments on behalf of its user. Google Duplex’s speech is so realistic that the person on the other side of the phone may not even realize that they are talking to a bot. Is it ethical to develop such a system? Should we prohibit bots from passing as humans, and force them to be transparent about who they are? If the answer is ‘Yes’, then our findings highlight the need to set standards for the efficiency cost that we are willing to pay in return for such transparency.”