After a difficult beginning, AI toys search for the positive
Las Vegas: Toy makers showcasing products at the Consumer Electronics Show (CES) stressed caution in deploying generative artificial intelligence in children’s toys, amid growing concern that poorly safeguarded systems could expose young users to inappropriate content.
The issue was underscored by a recent report from the Public Interest Research Groups (PIRG), which found that some AI-powered toys responded to sensitive prompts with alarming answers. In its November report, Trouble in Toyland, PIRG said an AI-enabled Kumma teddy bear offered advice about sex and even suggested how to find a knife.
According to the report, when prompted, the toy suggested that a sex partner could add a “fun twist” to a relationship by pretending to be an animal. The backlash led Singapore-based startup FoloToy to temporarily suspend sales of the product.
FoloToy chief executive Wang Le told AFP that the company has since switched to a more advanced version of the OpenAI model powering the toy. He said the prompts used during PIRG’s testing included language “children would not use” and expressed confidence that the updated version would now deflect or refuse to answer inappropriate questions.
Toy giant Mattel, meanwhile, did not reference the PIRG findings when it announced in mid-December that it was postponing the release of its first AI-enabled toy developed in partnership with ChatGPT-maker OpenAI.
Safety Versus Innovation
The rapid evolution of generative AI since the launch of ChatGPT has accelerated the arrival of so-called “smart toys” capable of holding conversations, answering questions, and forming ongoing interactions with children.
Among the four toys reviewed by PIRG was Curio’s Grok (unrelated to xAI’s chatbot of the same name), a four-legged plush toy shaped like a rocket that has been on the market since 2024. PIRG rated it the best performer in its category, noting that it refused to answer age-inappropriate questions and allowed parents to override AI recommendations and review interaction logs.
Grok has also earned the independent KidSAFE certification, which verifies compliance with child protection standards. However, PIRG flagged privacy concerns: because the toy continuously listens for voice prompts, it is unclear how the ambient audio it picks up is handled.
Curio told AFP it is addressing concerns raised in the report regarding the sharing of user data with partners such as OpenAI and Perplexity.
“At the very least, parents should be cautious,” said Rory Erlich of PIRG. “Toys that retain information about a child over time and try to form an ongoing relationship should especially be of concern.”
Regulation Calls Grow
Proponents say chatbot-enabled toys could serve as educational tools. Turkish company Elaves says its upcoming toy Sunny will use AI to help children learn languages. Conversations will be time-limited and regularly reset to avoid confusion or excessive use, said Elaves managing partner Gokhan Celebi.
Toy technology firm Olli said its AI systems are programmed to alert parents if inappropriate language is detected during interactions.
Critics argue that voluntary safeguards are not enough. Kathy Hirsh-Pasek, a psychology professor at Temple University, called for stronger oversight.
“Why aren’t we regulating these toys?” she asked. “I’m not anti-tech, but they rushed ahead without guardrails, and that’s unfair to kids and unfair to parents.”