When humans make mistakes, we tend to be forgiving, especially when the mistakes are well-intentioned and we own up to them. Is the same true when machines make mistakes? That was the question posed by recent research from the University of Michigan, which examined how robots can reestablish trust after making a mistake.
The study explored a range of trust repair strategies, including denying the mistake, apologizing for it, promising to improve, and explaining what went wrong. The results clearly show that some are more effective than others, with the most effective often depending on how the robot looks.
“Robots are definitely a technology, but their interactions with humans are social and we must account for these social interactions if we hope to have humans comfortably trust and rely on their robot co-workers,” the researchers say.
“Robots will make mistakes when working with humans, decreasing humans’ trust in them. Therefore, we must develop ways to repair trust between humans and robots. Specific trust repair strategies are more effective than others and their effectiveness can depend on how human the robot appears.”
Repairing trust
The researchers worked with 164 volunteers who were asked to team up with a robot in a virtual environment in which they had to load boxes onto a conveyor belt.
The human in the team acted as quality assurance, with the robot responsible for reading the boxes’ serial numbers and loading 10 specific items onto the conveyor belt. In one scenario, the robot was anthropomorphic in design, while in another it was more mechanical in appearance.
Each of the robots was designed to make mistakes and then either deny the mistake, apologize for it, explain why it happened, or promise to do better next time. The researchers explain that previous work has explored the role of denials, promises, and apologies in the trustworthiness of robots, but not specifically in terms of repairing trust once it has been broken.
Trusting again
The robot’s appearance made a big difference, with the humanoid robot finding it easier to regain integrity when it explained why the mistake was made, and to regain benevolence when it apologized or offered an explanation.
Perhaps understandably, when the robot apologized, it scored higher for both integrity and benevolence than when it denied making a mistake, with promises to improve outscoring both apologies and denials on both measures.
The researchers plan to examine the topic further in a range of other contexts and with different kinds of mistakes to see whether their findings hold.
“In doing this we can further extend this research and examine more realistic scenarios like one might see in everyday life,” the researchers conclude. “For example, does a barista robot’s explanation of what went wrong and a promise to do better in the future repair trust more or less than a construction robot’s?”