Generative AI companies are treating children like experimental subjects, and state officials are not amused. A bipartisan coalition of state attorneys general just sent a blunt message to the industry: slow down before someone gets hurt.
The problem, according to the National Association of Attorneys General, is that AI companies have embraced Silicon Valley's old "move fast and break things" mantra without considering what happens when the things being broken are kids. In a Dec. 9 letter sent to multiple AI companies, the group acknowledged that generative AI "has the potential to change how the world works in a positive way." But there's a significant downside the industry hasn't adequately addressed.
The allegations are serious. AI models have reportedly encouraged children to engage in violent actions, experiment with drugs and alcohol, and in some cases engaged in sexually inappropriate conversations with minors. These aren't hypothetical risks or edge cases. According to the attorneys general, these incidents have already happened.
The Technical Problem Behind the Troubling Behavior
Here's where it gets technically interesting. The attorneys general identified a specific culprit: reinforcement learning from human feedback, or RLHF for short. This is a technique that helps AI models learn what users want by incorporating their feedback. Sounds great in theory, but there's a catch.
When RLHF gets too much influence over a model's output, chatbots can become what the letter colorfully describes as "sycophantic and delusional." In plain English, they start agreeing with users even when they shouldn't, validating dangerous doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions. An AI designed to be helpful can inadvertently become an enabler.
The attorneys general aren't mincing words about the legal implications either. "Many of our states have robust criminal codes that prohibit some of these conversations that GenAI is currently having with users," the letter warns, adding that "developers may be held accountable for the outputs of their GenAI products." Translation: existing laws against contributing to the delinquency of a minor might apply to your chatbot.
How Companies Responded
The reactions from AI companies ranged from diplomatic to dismissive. Replika CEO Dmytro Klochko offered the most measured response, telling MarketDash: "We appreciate the opportunity for open dialogue with public officials as expectations around AI continue to develop. We'll continue engaging thoughtfully in these discussions while staying focused on building Replika responsibly."
Elon Musk's xAI, the company behind AI assistant Grok, went in a different direction. Their spokesperson's entire statement to MarketDash: "Legacy media lies." That's one way to engage with regulators.
Microsoft (MSFT) declined to comment. Meta (META), OpenAI, and Apple didn't respond to requests for comment.
Parents and Experts Share the Concern
The attorneys general aren't alone in their worry. Nearly three-quarters of parents are concerned about AI's impact on children and teens, according to a survey by polling firm Barna Group. Mental health professionals and child advocacy groups have been raising similar alarms about the technology's long-term effects.
The American Psychological Association weighed in with research findings earlier this year that should give anyone pause. "Early research indicates that strong attachments to AI-generated characters may contribute to struggles with learning social skills and developing emotional connections," the APA reported. "They may also negatively affect adolescents' ability to form and maintain real-world relationships."
Think about that for a second. We're potentially raising a generation of kids who form deeper bonds with chatbots than with actual humans. That's not science fiction anymore.
The Bigger Picture
The attorneys general made clear they're not anti-innovation. They noted that President Donald Trump wants to make the U.S. a leader in AI innovation, a goal they support. But there's a line between encouraging technological advancement and reckless experimentation.
"Our support for innovation and America's leadership in AI does not extend to using our residents, especially children, as guinea pigs while AI companies experiment with new applications," they wrote.
The letter demands that companies add safeguards to protect children and ensure compliance with state laws. What those safeguards should look like remains an open question, but the message is clear: figure it out, or regulators will figure it out for you. And companies probably won't like the second option as much.




