When your AI chatbot says something nasty about someone, who's responsible? That's the question at the heart of a legal battle between Alphabet Inc. (GOOGL) subsidiary Google and conservative influencer Robby Starbuck, and both sides have very different answers.
The Tech Giant's Defense: User Error, Not Our Problem
Google filed a motion Monday asking a Delaware state court to throw out Starbuck's lawsuit, which alleges the company's AI systems defamed him by generating false and damaging statements. And Google isn't making that request quietly.
The company's argument is pretty straightforward: Starbuck deliberately gamed the system. According to Google's filing, he intentionally misused AI chatbots to create what the industry calls "hallucinations"—those moments when AI confidently spits out complete nonsense as if it were fact.
More importantly, Google says Starbuck hasn't provided evidence that anyone actually saw or believed the allegedly defamatory content. In defamation law, that's kind of a big deal. If a chatbot generates something nasty but nobody reads it, did it really damage your reputation?
"This case is fundamentally flawed because it provides no context about how these outputs were generated and fails to name a single person who was actually misled. The Complaint does not provide evidence of political bias, but simply documents Plaintiff's misuse of developer tools to induce hallucinations," the filing stated.
The Influencer's Legal Team Isn't Having It
Starbuck's attorney, Krista Baughman, responded with what you might call controlled fury. She called Google's argument "equal parts rank falsehood and victim blaming," according to Reuters.
"The hubris of publishing life-altering lies about an innocent individual and then blaming the individual for those reckless outputs should be deeply concerning to users of Google's AI," Baughman said.
It's a compelling point. If Google's defense is essentially "he made our AI say bad things on purpose," that raises questions about how easily these tools can be manipulated and what responsibility companies have for their outputs.
A Pattern Emerges
This isn't Starbuck's first dance with AI-generated controversy. The influencer, who's made a name for himself opposing diversity and inclusion initiatives, previously sued Meta Platforms, Inc. (META) over similar AI-generated content. That case settled in August, and Starbuck reportedly went on to advise Meta on AI bias issues. Talk about turning lemons into consulting gigs.
The case highlights a thorny problem for tech companies racing to deploy AI tools: these systems can and do generate false information, and the legal framework for who's liable is still being written in real time, one lawsuit at a time.
Price Action: Alphabet Class A shares closed Monday at $285.02, up 3.11%, before slipping to $284.45 in after-hours trading. Class C shares also gained 3.11%, closing at $285.60, then eased 0.22% after hours.