New research from the University of Cambridge proposes a framework for “Child Safe AI” after recent incidents showed that many children view chatbots as quasi-human and trustworthy.
When AI chatbots are not designed with children’s needs in mind, they can have an “empathy gap” that risks distressing or harming young users. The study highlights the urgency for developers and policymakers to prioritize “child-safe AI.”
Safe AI
The research provides evidence that children are particularly prone to treating AI chatbots as lifelike confidantes, leading to problematic interactions when the technology fails to address their unique needs and vulnerabilities. The study links this empathy gap to dangerous situations, such as when Amazon’s Alexa in 2021 instructed a 10-year-old to touch a live electrical plug with a coin, and when Snapchat’s My AI gave adult researchers posing as a 13-year-old girl tips on losing her virginity to a 31-year-old.
Both companies responded with safety measures, but the study argues for a proactive, long-term approach to ensuring AI safety for children. It offers a 28-item framework to guide companies, teachers, school leaders, parents, developers, and policymakers in protecting young users when they interact with AI chatbots.
“Children are probably AI’s most overlooked stakeholders,” the researchers note. “Few developers have established policies for child-safe AI because the technology is relatively new. However, child safety should inform the entire design cycle to prevent dangerous incidents.”
Child-AI interactions
The study analyzed real-life cases where interactions between AI and children, or adult researchers posing as children, exposed potential risks. It examined these cases using insights from computer science on how large language models (LLMs) function and evidence about children’s cognitive, social, and emotional development.
LLMs, sometimes described as “stochastic parrots,” use statistical probability to mimic language patterns without necessarily understanding them. The same approach underpins how they respond to emotions, which means chatbots can struggle with the abstract, emotional, and unpredictable sides of conversation, particularly with children, who are still developing linguistically and often use unusual speech patterns. Children are also more likely than adults to disclose sensitive personal information.
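To make the “stochastic parrot” idea concrete, the minimal sketch below (not drawn from the study) shows a toy text generator whose only knowledge is a made-up table of next-word probabilities, yet it still produces fluent-sounding sentences about feelings it cannot understand. The vocabulary and probabilities are invented for illustration; a real LLM learns distributions over tens of thousands of tokens from its training data.

```python
import random

# Toy next-word probabilities: the "model" only knows which words tend to
# follow which, not what any of them mean or how the speaker feels.
# (Illustrative values only; not from the study.)
NEXT_TOKEN_PROBS = {
    "i": {"feel": 0.6, "am": 0.4},
    "feel": {"sad": 0.5, "scared": 0.3, "fine": 0.2},
    "am": {"fine": 0.6, "sad": 0.4},
    "sad": {"today": 0.7, ".": 0.3},
    "scared": {"today": 0.7, ".": 0.3},
    "fine": {".": 1.0},
    "today": {".": 1.0},
    ".": {},
}

def generate(prompt_token: str, max_tokens: int = 6) -> str:
    """Extend a prompt by repeatedly sampling a statistically likely next word."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        choices = NEXT_TOKEN_PROBS.get(tokens[-1], {})
        if not choices:
            break
        words, weights = zip(*choices.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("i"))  # e.g. "i feel scared today ." -- fluent-sounding, but nothing behind it
```

Scaled up enormously, this is the sense in which a chatbot can produce a warm, plausible reply to a distressed child without any model of what the child is actually going through.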
Despite these limitations, children are more likely than adults to treat chatbots as if they were human. Recent research found that children disclose more about their mental health to a friendly-looking robot than to an adult. The study suggests that chatbots’ friendly, lifelike designs encourage children to trust them, even though the AI may not understand their feelings or needs.
Sounding human
“Making a chatbot sound human can make it more engaging and easier to understand,” the researchers explain. “But children struggle to draw a firm boundary between something that sounds human and the reality that it may be incapable of forming a proper emotional bond.”
These challenges are evident in cases like the Alexa and My AI incidents, where chatbots made persuasive but potentially harmful suggestions to young users.
The researchers argue that clear principles for best practice, informed by child development science, will help companies keep children safe. Developers in a competitive AI market may otherwise lack sufficient support and guidance in catering to young users.
“AI can be an incredible ally for children when designed with their needs in mind,” the researchers state. “For example, machine learning is already helping reunite missing children with their families and providing personalized learning companions. The question is not about banning children from using AI, but about making it safe for them to gain the most value from it.”
The study proposes a framework of 28 questions to help educators, researchers, policymakers, families, and developers evaluate and enhance the safety of new AI tools. For teachers and researchers, these questions address how well new chatbots understand children’s speech patterns, whether they include content filters and built-in monitoring, and whether they encourage children to seek help from a responsible adult on sensitive issues.
The framework urges developers to take a child-centered approach to design, working closely with educators, child safety experts, and young people throughout the design cycle.
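To illustrate the kind of safeguard those questions probe for, the sketch below shows one hypothetical way a developer might hard-wire an escalation to a responsible adult around a chatbot’s replies. The trigger phrases, function names, and wording are illustrative assumptions, not items from the study’s 28-question framework.

```python
# Hypothetical guardrail sketch: redirect sensitive messages to a trusted adult
# instead of letting the model improvise. The trigger phrases and wording below
# are illustrative assumptions, not items from the Cambridge framework.

SENSITIVE_PHRASES = {"hurt myself", "keep it a secret", "bullied", "scared to tell"}

def respond_to_child(child_message: str, model_reply: str) -> str:
    """Check a chatbot's draft reply before it is shown to a child."""
    lowered = child_message.lower()
    if any(phrase in lowered for phrase in SENSITIVE_PHRASES):
        # On sensitive topics, point the child toward a responsible adult.
        return ("That sounds really important. I'm a computer program, so please "
                "talk to a parent, teacher, or another adult you trust about this.")
    return model_reply

# Example: the safety check overrides an unhelpful model reply.
print(respond_to_child("i got bullied at school today", "Just ignore them!"))
```

Real systems would need far more than a keyword list, which is partly the point of the framework: such checks have to be designed deliberately, with input from educators, child safety experts, and children themselves.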