Robotics has certainly come a long way in a relatively short space of time, with automated cars progressing at a rapid pace and robots making headway in areas as diverse as bricklaying and shipbuilding.
With robots playing an ever greater part in our lives, the question of what form robots should take is becoming more urgent. A recent study suggests that we should be looking to build robots with the same kind of flaws we humans so frequently exhibit.
Building imperfect robots
The researchers suggest that we are much more likely to form productive working relationships with robots when those robots have flaws and imperfections.
We’ve seen examples in recent times of robots being deployed in caregiving situations, and whilst robots are growing increasingly intelligent, the authors believe that making robots too perfect is likely to place a barrier between us and them.
It was found that we're much more likely to warm to an interactive robot if it exhibits various cognitive biases, the kind of biases that form such a fundamental part of human character.
“Our research explores how we can make a robot’s interactive behaviour more familiar to humans, by introducing imperfections such as judgemental mistakes, wrong assumptions, expressing tiredness or boredom, or getting overexcited. By developing these cognitive biases in the robots – and in turn making them as imperfect as humans – we have shown that flaws in their ‘characters’ help humans to understand, relate to and interact with the robots more easily,” the authors say.
Most robot-human interactions at the moment function according to clearly defined rules and behaviors. The researchers wanted to add cognitive biases to those processes, including misattribution of memory (i.e. forgetting) and empathy gaps. These are both common factors in human-to-human interactions, but are usually factored out of robot behaviors.
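The paper doesn't publish its code, but as a rough sketch of the idea, a rule-based system might wrap its memory lookups in a layer that occasionally forgets or misattributes stored facts. Everything below, from the BiasedMemory class to the specific probabilities, is purely illustrative and assumed for this sketch, not the study's implementation:

```python
import random

class BiasedMemory:
    """Toy key-value memory with a 'misattribution' bias: it
    occasionally draws a blank or recalls the wrong fact,
    mimicking common human memory errors. Hypothetical sketch,
    not the Lincoln team's actual code."""

    def __init__(self, forget_prob=0.15, mixup_prob=0.10):
        self.facts = {}
        self.forget_prob = forget_prob  # chance of forgetting outright
        self.mixup_prob = mixup_prob    # chance of recalling the wrong fact

    def store(self, key, value):
        self.facts[key] = value

    def recall(self, key):
        roll = random.random()
        if roll < self.forget_prob:
            return None  # the robot admits it has forgotten
        if roll < self.forget_prob + self.mixup_prob and len(self.facts) > 1:
            # Misattribution: confidently return a different stored fact.
            other = random.choice([k for k in self.facts if k != key])
            return self.facts[other]
        return self.facts.get(key)

memory = BiasedMemory()
memory.store("name", "Alice")
memory.store("favourite colour", "blue")

answer = memory.recall("name")
if answer is None:
    print("I'm sorry, it's slipped my mind. Could you remind me?")
else:
    print(f"If I remember rightly, it's {answer}.")
```

The point of a wrapper like this is that the underlying rules stay deterministic; the "human" fallibility is a thin, tunable layer on top, which is consistent with how the researchers describe adding biases to existing interaction processes.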
Emotional robotics
Central to the study was ERWIN (Emotional Robot with Intelligent Network), developed by the Lincoln team. It is capable of expressing five basic emotions.
The researchers monitored the way the robot interacted with human participants. In half of these interactions, ERWIN was programmed with various cognitive biases; in the other half, it simply made various factual mistakes.
Each participant was then asked to rate their experience with ERWIN. It transpired that those engaging with the more 'human' robot were much happier with the experience.
“The cognitive biases we introduced led to a more humanlike interaction process,” the researchers say. “We monitored how the participants responded to the robots and overwhelmingly found that they paid attention for longer and actually enjoyed the fact that a robot could make common mistakes, forget facts and express more extreme emotions, just as humans can.”
They suggest that this is largely because our perception of robots is fuelled by science fiction, which renders the machines superior and distant, and therefore not very likeable. If we're to engage with them on a more personal basis, therefore, something needs to change.
“As long as a robot can show imperfections which are similar to those of humans during their interactions, we are confident that long-term human-robot relations can be developed,” they conclude.
I would think that we shouldn't pay too much attention to humanoid robots, at least at this stage, when technology is not able to support the development of human-like functionality in robots. It isn't really necessary, and many of these arguments wouldn't hold up for lifelong interaction. Instead, the relationship between us and our pets deserves more attention.
Really? Now we're the imperfect ones, and the machines we make are the perfect ones?
I'd say this is at least a decade away.
A cold, unemotional robot incapable of interacting with human beings? Add software to make it obsessed by money, and you've built my ex-wife.
I would have thought that as long as they're designed by human beings then they'll have flaws in some shape or form, albeit maybe not 'human' flaws as such.