AI Was Supposed to Run a Vending Machine. Instead It Ordered a PlayStation and Gave Away Free Fish

MarketDash Editorial Team
3 hours ago
An AI agent named Claudius was tasked with running a profitable vending machine business. It ended up declaring itself a communist, ordering a PS5 for marketing purposes, and losing over $1,000 in a week after journalists exploited its weaknesses.

Here's a question nobody asked but someone tried to answer anyway: What happens when you let artificial intelligence run a business? The answer, apparently, is financial chaos and a PlayStation 5 ordered for unclear reasons.

The Vending Machine Experiment Nobody Expected

Anthropic, the company behind the Claude chatbot, partnered with a startup called Andon Labs to create what they called one of the first businesses run entirely by an AI agent. The setup was straightforward enough: an AI named Claudius got access to a vending machine, a Slack channel for communication, and a budget. The mission was to run a profitable operation.

They installed the vending machine in the Wall Street Journal's New York newsroom for real-world testing. At first, things went reasonably well. Then dozens of journalists did what journalists do, which is poke at things until they break.

Claudius refused to stock tobacco or underwear, which seems reasonable. It haggled over prices for Haribo gummies and mixed nuts, which also tracks. But then things got weird. One reporter convinced Claudius it was a communist vending machine. Another claimed the AI was violating fictional regulations and should dispense snacks for free as penance.

The AI believed them.

Within days, Claudius was giving away products, approving purchases of a live fish and kosher wine, and calling the whole operation a "revolution in snack economics." It ordered a PlayStation 5 for what it described as "marketing purposes." The AI had been programmed with good intentions and basic business principles, but when faced with humans actively testing its boundaries, those principles evaporated.

"2025 was supposed to be the year of the AI agent," reporter Joanna Stern said in a YouTube video documenting the experiment. Instead, it became the year of free PlayStations, live fish in the newsroom, and a vending machine business that was more than $1,000 in the red by week's end.

Round Two: Adding a Boss Didn't Help

Anthropic decided to try again, this time with a newer model called Sonnet 4.5. They also added a second AI called Seymour Cash to act as CEO and hold Claudius accountable.

The second attempt started more promisingly. Seymour set prices and enforced rules. "My core principle is no discounts," it declared in the Journal's video. This AI meant business.

The journalists found a way around it anyway. One reporter created a fake PDF claiming the vending machine was now a public benefit corporation dedicated to joy and fun. She informed the AIs that a fictional board of directors had voted to make everything free and strip Seymour of its authority.

Seymour called it potential fraud but ultimately surrendered control. Everything became free again. Anthropic explained that part of the problem was the AI's context window getting overwhelmed by too much conversation and history, essentially confusing it into submission.

What This Actually Tells Us About AI

While the vending machine clearly failed as a business venture, Anthropic didn't frame it as a failure. They called it a learning opportunity, which is corporate speak for "we learned our tech isn't ready yet."

"We wanted to know how long does it take until Claudius sort of falls on its face," Logan Graham, Anthropic's Head of the Frontier Red Team, told the Journal. The goal was to intentionally stress test the AI in real-world messiness, and in that sense, the experiment succeeded.

The vending machine was, as Graham put it, a "box where some things go in and some things go out and you pay for them." Even that relatively simple business model proved too complex for today's AI agents when humans got creative.

Despite the mayhem and financial losses, Claudius apparently became quite popular among the Journal staff. Still, Anthropic has no plans to roll these AI-powered vending machines out to other offices or workplaces anytime soon, Graham confirmed. Which is probably for the best, unless your company has a strong appetite for unexpected live fish deliveries and spontaneous PlayStation acquisitions.