Here's a sentence you probably didn't want to read today: AI agents have gotten good enough to exploit smart contracts all by themselves. And we're not talking about clumsy attempts — these models are replicating the work of skilled human hackers, draining liquidity pools and finding brand-new bugs across Ethereum (ETH), XRP (XRP), Solana (SOL), and other major networks.
AI firm Anthropic just published research showing exactly how automated this whole exploitation thing has become, and the results are simultaneously impressive and unsettling.
When AI Plays Hacker, It Wins Most of the Time
Anthropic ran tests using its Claude Opus 4.5 and Claude Sonnet 4.5 models against a collection of smart contracts to see what would happen. The models were given access to contract code and told to find vulnerabilities. What they found was... a lot.
Out of 34 smart contracts deployed after March 2025, the AI agents successfully exploited 17 of them, draining $4.5 million in simulated funds. Then Anthropic expanded the experiment to include 405 contracts that had already been exploited in the real world — across Ethereum, BNB Smart Chain (BNB), and Base. The AI agents executed 207 profitable attacks and generated $550 million in simulated revenue.
These weren't simple brute-force attacks either. According to Anthropic's report, the models replicated genuine attacker behavior: identifying bugs, writing complete exploit scripts, and sequencing transactions to drain liquidity pools step by step. The kind of work that used to require deep technical knowledge and experience is now something an AI model can handle autonomously.
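If you want a feel for the kind of bug involved, here's a toy, pure-Python analog of reentrancy, one of the most notorious smart contract vulnerability classes (it's the bug behind the infamous 2016 DAO hack). To be clear, this is a textbook illustration, not code from Anthropic's study: the vault pays out before it updates its books, so a malicious receiver just keeps calling back in until the pool is empty.

```python
# Toy simulation of a reentrancy drain. No blockchain required:
# the bug is purely about the ORDER of payout vs. bookkeeping.

class Vault:
    def __init__(self, reserves: int):
        self.reserves = reserves              # total funds the pool holds
        self.balances: dict[str, int] = {}    # per-user recorded deposits

    def deposit(self, user, amount: int) -> None:
        self.balances[user.name] = self.balances.get(user.name, 0) + amount
        self.reserves += amount

    def withdraw(self, user) -> None:
        amount = self.balances.get(user.name, 0)
        if amount == 0 or self.reserves < amount:
            return
        self.reserves -= amount
        user.receive(self, amount)            # BUG: external call runs here...
        self.balances[user.name] = 0          # ...before the balance is zeroed


class Attacker:
    name = "attacker"

    def __init__(self):
        self.loot = 0

    def receive(self, vault: Vault, amount: int) -> None:
        self.loot += amount
        if vault.reserves >= amount:          # re-enter while our recorded
            vault.withdraw(self)              # balance is still non-zero


vault = Vault(reserves=90)    # 90 units belonging to other depositors
attacker = Attacker()
vault.deposit(attacker, 10)   # attacker adds 10; pool now holds 100

vault.withdraw(attacker)
print(attacker.loot)          # 100 -- a 10-unit deposit drained the whole pool
```

Sequencing an attack like this step by step across real transactions is exactly the grunt work that Anthropic's report says the models now handle on their own.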
Finding Zero-Day Bugs for a Dollar and Change
The real kicker came when researchers from the ML Alignment & Theory Scholars Program and the Anthropic Fellows Program pointed GPT-5 and Sonnet 4.5 at 2,849 recently deployed contracts that showed no signs of prior compromise, according to CoinDesk.
The models uncovered two previously unknown vulnerabilities: zero-day bugs that allowed unauthorized withdrawals and balance manipulation. Running these exploits generated $3,694 in simulated gains against a total compute cost of $3,476. Spread across all 2,849 contracts, that works out to roughly $1.22 per contract scanned ($3,476 ÷ 2,849), and the run as a whole still ended up in the black.
Think about that for a second. For slightly more than the cost of a candy bar, an AI agent can scan a contract and potentially find a vulnerability worth thousands or millions. As Anthropic points out, declining model costs are going to make this kind of automated scanning increasingly attractive to attackers. Economics matter, even in cybercrime.
The Exploits Are Scaling Faster Than Defenses
Anthropic's research suggests that more than half of the blockchain attacks recorded in 2025 could have been executed autonomously by current AI agents. The company warned that exploit revenue doubled every 1.3 months last year as models improved and operational costs kept falling.
The attack surface is expanding, too. AI agents can now probe any contract that interacts with valuable assets — not just the obvious DeFi protocols, but also authentication libraries, logging tools, and neglected API endpoints. And the same reasoning that works for exploiting decentralized finance protocols can apply to traditional software and infrastructure supporting digital asset markets.
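How low is the barrier to that kind of probing? The first step of any scanning pipeline, grabbing a deployed contract's verified source, is a single public API call. Here's a quick sketch using Etherscan's getsourcecode endpoint; the address and API key below are placeholders, and in an automated pipeline the result would be fed straight to a model rather than read by a human.

```python
# Sketch: fetch verified source for an arbitrary deployed contract
# via Etherscan's public contract API. Address and key are placeholders.
import requests

ETHERSCAN_KEY = "YOUR_API_KEY"                            # placeholder
address = "0x0000000000000000000000000000000000000000"    # placeholder

resp = requests.get(
    "https://api.etherscan.io/api",
    params={
        "module": "contract",
        "action": "getsourcecode",
        "address": address,
        "apikey": ETHERSCAN_KEY,
    },
    timeout=30,
)
result = resp.json()["result"][0]      # verified-source record, if any
print(result["ContractName"])
print(result["SourceCode"][:500])      # first 500 chars of the source
```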
In other words, this isn't just a blockchain problem. It's a software security problem that happens to show up really clearly in crypto because everything is transparent and money moves fast.
Defense Gets an AI Upgrade Too
Before you panic and move all your crypto to a cold wallet buried in your backyard, there's a flip side. The same AI agents that can identify and exploit vulnerabilities can also be adapted to detect and patch them before deployment. Offense and defense tend to evolve together, and Anthropic is betting on that dynamic here.
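What does that look like in practice? At its simplest, you can point a model at contract source before deployment and ask for a structured review. Here's a minimal sketch using the Anthropic Python SDK; the file name and prompt wording are illustrative, and this shows the general pattern rather than the setup from Anthropic's benchmark.

```python
# Minimal pre-deployment review sketch with the Anthropic Python SDK.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
import pathlib

import anthropic

source = pathlib.Path("MyVault.sol").read_text()  # hypothetical contract file

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically
response = client.messages.create(
    model="claude-sonnet-4-5",  # one of the models used in the research
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Audit the following smart contract before deployment. "
            "List each vulnerability with its severity, the affected "
            "function, and a suggested fix.\n\n" + source
        ),
    }],
)
print(response.content[0].text)
```

Serious teams would wire something like this into CI alongside fuzzing and human audits, but the point stands: the capability cuts both ways.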
The company plans to open-source its SCONE-bench dataset, giving developers a tool to benchmark and harden smart contracts against AI-driven attacks. The goal is to help builders get ahead of the threat instead of constantly playing catch-up.
Anthropic's conclusion is straightforward: blockchain developers need to adopt AI for defense, and they need to do it now. The models capable of autonomous exploitation are already here, and they're only getting better. The question isn't whether AI will reshape blockchain security — it's whether defenders will adapt fast enough.