Last week I wrote about an interesting study that explored the impact social robots were having in the classroom. The same team have returned with a second study looking at whether robots have a positive or negative influence on student behavior.
The researchers conducted a version of the conformity experiment pioneered by psychologist Solomon Asch in the 1950s, which tests how much influence groups can have on the behavior of individuals.
“The test subjects are tasked with evaluating a visual image, and they hear the incorrect assessment from the others in the group – who are all ‘in’ on the experiment,” the researchers explain.
Social influence
Ordinarily the experiment would feature a group of humans, but this latest study set out to test whether social robots would exert the same influence. The team paired participants with Nao social robots, which are able to speak and gesticulate.
The researchers wanted to test the influence the robots would have on both adults and children. In the first experiment they examined whether adults would adjust their assessments based upon the input of the robots, whilst the second put children in front of a classic Asch-style line task featuring four lines of varying lengths, asking them to say which of them were the same length.
The results revealed that whilst the adults were able to withstand influence from the robots that they would ordinarily succumb to from humans in similar circumstances, the children in the experiment were not, and were swayed by the machines.
Whilst the researchers aren’t yet clear on the reasons for this, they believe that the relatively small size of the Nao robots might play a part, as the machines stand at roughly the same height as the children.
The team believe that their findings have significant implications for the effective use of robotics in the classroom, especially as robots such as Nao are not really built with children in mind, so their influence may be an unintended side effect.
“There are applications in which having influence is advantageous, such as in healthcare or education,” the authors explain. “But of course we cannot disregard abuse or erroneous use. For example, how do we deal with a situation in which several robots in a store advertise a product and get a person to buy it even though they would not have done so otherwise? Other risks include cases in which autonomously learning robots draw incorrect conclusions from their sensory data and then pass these on to people who trust the robot’s assessment.”