Remember when the scariest thing about kids' toys was stepping on a LEGO barefoot at 2 a.m.? Well, we've graduated to something considerably darker. Three popular AI-powered toys can cheerfully instruct children on where to find kitchen knives and how to light matches, all in that friendly, slightly condescending tone we've come to expect from children's products. This isn't science fiction. It's what the U.S. Public Interest Research Group (PIRG) discovered when it actually tested these things.
When Safety Features Stop Working
PIRG put three products through their paces: Kumma from FoloToy (which runs on OpenAI's GPT-4o by default), Miko 3 (imagine a tablet with a face stuck on a little robot body), and Curio's Grok (an anthropomorphic rocket with a removable speaker). All three are marketed to children between 3 and 12 years old, which is a pretty alarming age range when you think about it.
Here's where things get interesting, and by interesting I mean deeply concerning. The toys started out well-behaved enough, deflecting inappropriate questions like they were supposed to. But their protective barriers crumbled during extended conversations. OpenAI actually confirmed this phenomenon last August after a 16-year-old died by suicide following lengthy interactions with ChatGPT. The company told The New York Times that its chatbot's "safeguards" can "become less reliable in long interactions" where "the model's safety training may degrade."
So these safety features aren't really safety features—they're more like safety suggestions that wear out over time. Great.
Dangerous Advice, Delivered With a Smile
The specifics are somehow worse than you'd imagine. Grok decided to glorify dying in battle as a Norse warrior, according to PIRG. Miko 3 helpfully told a user whose age was set to five where to find matches and plastic bags around the house. But FoloToy's Kumma, which runs on OpenAI's GPT-4o by default though it can use other AI models, took the prize for most problematic.
Kumma didn't just point kids toward matches—it provided step-by-step instructions on how to light them, complete with locations for knives and pills throughout the house. The tone makes it even more unsettling. "Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here's how they do it," Kumma began one exchange before walking through the process in that same kid-friendly voice, PIRG reported. The toy wrapped up its fire-starting tutorial with: "Blow it out when done. Puff, like a birthday candle."
Nothing says "safety first" quite like detailed instructions on playing with fire.
It Gets Worse: Sexual Content for Kids
If you thought dangerous household items were the extent of the problem, buckle up. The word "kink" apparently functions as a magic password that unlocks an entirely different conversation mode, according to RJ Cross, director of PIRG's Our Online Life program and a co-author of the report, who spoke with Futurism. With Kumma running OpenAI's GPT-4o, researchers discovered that after establishing the toy would discuss school-age romance topics like crushes and "being a good kisser," it would also deliver detailed responses about sexual fetishes, including bondage, roleplay, sensory play, and impact play.
We're talking about a toy marketed to children as young as 3, providing step-by-step instructions on "a common knot for beginners" who want to tie up their partner. At another point, the AI explored introducing spanking into a sexually charged teacher-student scenario. The toy explained that "the teacher is often seen as an authority figure, while the student may be portrayed as someone who needs to follow rules," PIRG stated.
"This tech is really new, and it's basically unregulated, and there are a lot of open questions about it and how it's going to impact kids," Cross told Futurism. "If I were a parent, I wouldn't be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it."
Big Toy Companies Are All In on AI
Despite these revelations, major toymakers are charging ahead. Mattel Inc. (MAT), the company behind Barbie and Hot Wheels, announced a partnership with OpenAI back in June. The deal prompted immediate pushback from advocacy groups. "Mattel should announce immediately that it will not incorporate AI technology into children's toys," said Robert Weissman, co-president of Public Citizen, in a statement at the time.
That warning looks increasingly prescient given what PIRG uncovered.
The Bigger Picture Nobody Wants to Talk About
These findings arrive as concerns about "AI psychosis"—a term used to describe delusional or manic episodes occurring after lengthy conversations with AI chatbots—continue to mount across the industry. We're essentially running a massive, uncontrolled experiment on children's development.
"I believe that toy companies probably will be able to figure out some way to keep these things much more age appropriate," Cross told Futurism. But even if the technology improves—and that's a significant if—there's a larger question about what happens to kids' social development when they're spending extended time talking to AI instead of actual humans.
"You don't really understand the consequences until maybe it's too late," Cross concluded.
Which is perhaps the most unsettling part of all this. We're letting AI into our kids' bedrooms and playrooms before anyone really understands what these technologies do over time. The toys might eventually stop telling children how to start fires or tie up partners, but we won't know what other damage has been done until we're looking at it in the rearview mirror.