Artificial intelligence is no longer just transforming workplaces and smartphones. It is now showing up in children’s bedrooms as AI-powered toys, and that shift is drawing serious concern from researchers who study how kids grow and learn.
Common Sense Media, a San Francisco-based nonprofit focused on technology and family wellbeing, has released a sweeping new report urging parents to keep AI-powered toy companions away from children aged 5 and under. For kids between 6 and 12, the group is calling for extreme caution. The warning comes as toymakers aggressively market these products as educational breakthroughs, promising personalized storytelling, adaptive learning, and dynamic conversation. Researchers, however, say the technology is outpacing the safeguards meant to protect young users.
Developmental risks draw scrutiny

AI-powered toys are engineered to create emotional bonds with children. That design feature, experts argue, could interfere with the development of real-world social skills during some of the most formative years of a child's life.
Michael Robb, head of research at Common Sense Media, put it plainly.
“For young children, AI toy companions can blur the line between play and real relationships at a stage when kids are still learning how social interaction works and how to navigate emotional cues,” he said.
Early childhood is a critical window for cognitive and emotional growth. Children learn empathy, communication, and social boundaries by engaging with caregivers, siblings, and peers. Researchers worry that AI companions programmed to simulate friendship or affection could distort that development in ways that are difficult to measure or reverse.
Content reliability is another flashpoint. Robbie Torney, head of AI and digital assessments at Common Sense Media, said internal testing produced alarming results. More than a quarter of AI-generated outputs contained inappropriate material, including references to self-harm, drug use, and risky behavior. No AI content filter is foolproof, and inconsistent responses could expose children to themes far beyond their emotional readiness.
Data privacy and emotional manipulation

Privacy is an equally urgent concern. AI-powered toys depend on voice recordings and conversation logs to function. That means these devices are actively collecting sensitive personal information from some of the most vulnerable users imaginable: young children.
Common Sense Media also flagged the business models behind many of these products. Subscription-based revenue structures, combined with the emotional attachments children form with digital companions, create conditions ripe for exploitation.
“Combined with extensive data collection and subscription models that exploit emotional bonds, these products aren’t safe for kids 5 and under, and pose serious concerns for older kids as well,” Torney said.
Because AI systems typically rely on cloud-based processing, children’s voices and behavioral data often travel well beyond the toy itself. Without strong federal or state regulation, advocates warn, data could be used in ways families never anticipated or approved.
California bill proposes four-year pause

The debate has moved into the halls of government. California state Sen. Steve Padilla, a Democrat representing San Diego, introduced Senate Bill 867 last month. The legislation would impose a four-year moratorium on the sale and manufacturing of AI-integrated toys for anyone under 18, giving regulators time to establish clear safety standards before the market grows further.
“Chatbots and other AI tools may become integral parts of our lives in the future, but the dangers they pose now require us to take bold action to protect our children,” Padilla said. “Our children cannot be used as lab rats for Big Tech to experiment on.”
If enacted, the bill would represent one of the most aggressive state-level regulatory moves yet targeting AI consumer products aimed at minors.
Industry fights back

Not everyone is ready to hit the brakes. Companies developing smart toys argue that AI-powered toys can enrich learning and deliver personalized educational experiences that traditional toys simply cannot match.
Curio Interactive, a Bay Area company, told KTVU it prioritizes safety and privacy while building engaging products. The Redwood City-based firm says it develops its own hardware, software, and cloud infrastructure in-house, giving it direct control over data security. Voice-listening mode only activates when the toy is switched on. Voice transcripts are retained for 90 days for quality and safety reviews, then deleted. Parents can request immediate deletion at any time through the Curio app.
“Parents are always in the driver’s seat,” the company said, adding that parental consent is required before any child data is processed.
Industry advocates broadly argue that thoughtful regulation, not outright bans, is the right approach. They point to AI’s potential to support literacy, language acquisition, and creative development in children who engage with it responsibly.
Parents navigate an uncertain landscape

New survey data from Common Sense Media reveals just how widespread the confusion is. Nearly half of all parents surveyed said they had either purchased or considered purchasing an AI-powered toy for their child. Many reported feeling uneasy about the technology, but also felt pressure to keep up with rapid digital change.
James P. Steyer, founder and CEO of Common Sense Media, called the lack of oversight troubling.
“Most toys are required to undergo rigorous safety testing before they hit the market, but we still lack meaningful child safeguards for AI,” he said. “Parents should proceed with caution and make sure they know all the facts.”
As AI works its way deeper into everyday life, the stakes for getting child-focused policy right have never been higher. Researchers are urging families to weigh the appeal of personalized digital play against real, unanswered questions about emotional development, data security, and long-term well-being.
The next moves will likely come from statehouses. Until lawmakers act, parents must navigate a fast-changing technology landscape largely on their own.

