In today’s digital world, kids encounter a huge amount of information online. Some of it isn’t true, and a growing share is generated by machines, such as AI-driven language models. As kids get older, they need to learn how to judge whether a source is reliable – a key part of critical thinking.
A study from the Singapore University of Technology and Design found that children aged 3 to 5 decide whom to trust based on how accurate a source has been in the past. This shows how kids learn to navigate the information around them, figuring out who and what to believe, whether it comes from a person or a robot.
Sources of trust
“Children do not just trust anyone to teach them labels, they trust those who were reliable in the past. We believe that this selectivity in social learning reflects young children’s emerging understanding of what makes a good (reliable) source of information,” the researchers explain. “The question at stake is how young children use their intelligence to decide when to learn and whom to trust.”
In the research, kids aged 3 to 5 from Singapore preschools – ChildFirst, Red SchoolHouse, and Safari House – were split into two groups, “younger” and “older,” based on whether they were below or above the median age of 4.58 years.
Each child was paired with either a robot or a person who gave correct or incorrect labels for familiar objects such as a ball or a book. The goal was to determine whether the child’s trust in the informant’s ability to label things correctly in the future depended on the informant’s identity (human or robot), the informant’s track record of reliability, and the child’s age.
Respected teachers
During the study, each child interacted with only one informant, and trust was measured by how willing the child was to accept new information from that informant. The robot informant was NAO, a humanoid social robot made by SoftBank Robotics. To keep the comparison fair, the human informant mimicked the robot’s movements. An experimenter sat with the child to ask the questions, making sure the child didn’t feel pressured to agree with the informant.
The findings showed that kids trusted both human and robot informants who had been accurate before. But if an informant, especially a robot, had made mistakes in the past, kids were less likely to accept new information from them. As for age differences, younger kids were more likely to trust an unreliable human than an unreliable robot, while older kids tended to be skeptical and reject information from any unreliable informant, whether human or robot.
“These results implicate that younger and older children may have different selective trust strategies, especially the way they use informants’ reliability and identity cues when deciding who to trust. Together with other research on children’s selective trust, we show that as children get older, they may increasingly rely on reliability cues to guide their trust behavior,” the researchers explain.
Educational implications
This study has important implications for teaching, especially as robots and other non-human tools become more common in classrooms. Right now, kids may not see robots as being as trustworthy as people, particularly if they haven’t interacted with them much. But as kids grow more accustomed to smart machines, they may start to regard robots as competent and reliable sources of information.
Future studies could dig deeper into how kids learn in other areas, such as using tools, understanding emotions, or remembering locations. In the meantime, the researchers hope their findings will inform how educational tools are designed.
“Designers should consider the impact of perceived competence when building robots and other AI-driven educational tools for young children. Recognizing the developmental changes in children’s trust of humans versus robots can guide the creation of more effective learning environments, ensuring that the use of technologies aligns with children’s developing cognitive and social needs,” the authors conclude.