The AI chip battle just got a lot more interesting. Alphabet Inc. (GOOGL) delivered one of its best stock performances in years, and it's not just because people love searching for cat videos. The company's new AI model and homegrown chips are starting to challenge Nvidia Corp. (NVDA) in ways that actually matter—performance, cost, and efficiency.
In November alone, Alphabet shares crushed Nvidia by 30 percentage points, the widest one-month gap since 2017. Fast forward to 2025, and Alphabet is leading the Magnificent Seven pack with a 70% gain year to date—more than double what Nvidia has managed.
What's driving this surge? Two things: Gemini 3, Google's most sophisticated AI model yet, and Tensor Processing Units (TPUs), the custom chips designed specifically to take on Nvidia's dominance in AI computing. The question investors are asking now is whether this marks the beginning of a real shift in the AI chip landscape, or just a temporary bump in Nvidia's otherwise unstoppable momentum.
How Google's TPUs Stack Up Against Nvidia's GPUs
Gemini 3 arrived just eight months after its predecessor, a lightning-fast turnaround by AI model development standards. It combines language processing, visual understanding, and logical reasoning in ways that push the boundaries of what AI systems can handle. With 650 million monthly active users, Gemini is closing the gap with OpenAI's ChatGPT, and here's the kicker: it runs entirely on Google's TPUv7 chips, built in partnership with Broadcom Inc. (AVGO).
These chips aren't just incrementally better—they're fundamentally different in design. "The investor debate has now shifted to competition—between OpenAI and Google Gemini LLMs, and between Google/AVGO-based TPU and incumbent NVDA GPU-based chips," said Bank of America semiconductor analyst Vivek Arya in a note Monday.
Arya acknowledged that Nvidia's latest Blackwell chips, including the B300 and GB300 variants, still deliver the most comprehensive performance across a broad spectrum of AI applications. Google's TPUv7, by contrast, excels in specific scenarios, particularly AI inference and training workloads.
But in certain use cases, especially inference tasks using 8-bit floating point (FP8) precision, Google's TPUv7 is proving more efficient and significantly cheaper to operate. Arya highlighted that TPUv7 outperforms Blackwell Ultra in performance per watt, delivering 5.42 teraflops per watt in FP8 workloads compared to 3.57 teraflops per watt for Nvidia's chip.
The cost difference is even more striking. TPUv7 is estimated at $3.50 per hour to rent internally, while Nvidia's GPU runs about $6.30 per hour. For the right workloads, this translates to a total cost of ownership up to 40% lower with TPUv7. That's not a rounding error—that's the kind of savings that makes CFOs pay attention.
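As a quick sanity check, a few lines of Python reproduce the ratios behind these claims. The inputs are Arya's estimates as quoted above, not official specs or audited figures:

```python
# Back-of-envelope check of the efficiency and cost figures cited above.
# All inputs are analyst estimates from the article, not vendor specs.

TPU_V7_TFLOPS_PER_WATT = 5.42    # FP8 workloads
BLACKWELL_TFLOPS_PER_WATT = 3.57
TPU_V7_HOURLY = 3.50             # estimated internal rental cost, USD/hr
NVDA_GPU_HOURLY = 6.30           # USD/hr

# Efficiency advantage: how much more FP8 work per watt the TPU delivers
efficiency_gain = TPU_V7_TFLOPS_PER_WATT / BLACKWELL_TFLOPS_PER_WATT - 1
print(f"TPUv7 performance-per-watt advantage: {efficiency_gain:.0%}")  # ~52%

# Saving implied by the hourly rental rates alone
hourly_saving = 1 - TPU_V7_HOURLY / NVDA_GPU_HOURLY
print(f"Hourly cost saving: {hourly_saving:.0%}")  # ~44%
```

The raw hourly rates imply roughly a 44% saving; total cost of ownership also reflects utilization, networking, and power, which is broadly consistent with the "up to 40% lower" figure Arya cites for the right workloads.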
Why Nvidia Still Holds the Competitive Edge
Before we declare Nvidia dethroned, there's a major catch: TPUv7 has limited availability. While Nvidia's GPUs are available across Amazon.com Inc.'s (AMZN) AWS, Microsoft Corp.'s (MSFT) Azure, and Google Cloud Platform, TPUs are currently exclusive to Google Cloud.
This matters more than it might seem. TPUs are locked into Google Cloud, which makes them less attractive to clients with massive datasets stored elsewhere. Moving data between cloud platforms is expensive and complicated, so most customers choose compute resources that match where their data already lives.
"TPUs have been proven only in Google's data center, while GPUs are available in multiple cloud environments including Google Cloud," Arya said. He also estimated that Google will spend $10 billion in 2025 on both GPUs and TPUs, suggesting that even internally, Alphabet recognizes the need for hybrid computing approaches.
Arya believes that Nvidia's upcoming Vera Rubin platform, expected in the second half of 2026, could shift the competitive dynamics yet again.
Is Nvidia Stock Actually a Bargain Right Now?
According to Arya, Nvidia shares are trading at compelling valuations following November's decline. "NVDA is currently trading around 25x forward PE, right around its prior troughs in October 2023 and July 2022," he wrote. Historically, whenever Nvidia stock reached that level, it rebounded to between 30 and 40 times forward earnings within the following three to six months, with a five-year median of 37x.
He also pointed out that Nvidia is now trading at its widest ever discount—approximately 40%—relative to Broadcom, which currently trades at 42 times forward earnings. In Arya's view, this discount reflects the market already pricing in a 10-point shift in AI market share toward Broadcom and Google by 2026 or 2027—a shift he believes may be overstated.
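The valuation arithmetic in the two paragraphs above is easy to verify. The multiples below are those quoted from Arya's note (forward P/E ratios move daily, so treat them as a snapshot), and the re-rating calculation assumes earnings estimates hold constant, which is a simplification:

```python
# Sanity-checking the valuation math quoted above.
# Inputs are the forward P/E multiples cited in Arya's note (a snapshot).

NVDA_FWD_PE = 25.0        # Nvidia forward P/E
AVGO_FWD_PE = 42.0        # Broadcom forward P/E
NVDA_5Y_MEDIAN_PE = 37.0  # Nvidia's five-year median forward multiple

# Discount of Nvidia's multiple relative to Broadcom's
discount_vs_avgo = 1 - NVDA_FWD_PE / AVGO_FWD_PE
print(f"NVDA discount to AVGO: {discount_vs_avgo:.0%}")  # ~40%

# Implied upside if the multiple re-rates back to its five-year median,
# holding earnings estimates constant (a simplifying assumption)
rerating_upside = NVDA_5Y_MEDIAN_PE / NVDA_FWD_PE - 1
print(f"Upside from re-rating to the 5-year median: {rerating_upside:.0%}")  # ~48%
```

The 40% discount matches the figure in the note, and the re-rating math shows why the historical pattern of rebounds to a 30x–40x multiple is material to the bull case.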
"We continue to believe the current situation is another overstated DeepSeek January 2025-like moment that provided a particularly attractive buying opportunity for AI semis," Arya added, referring to a temporary market panic that preceded a strong rebound in AI semiconductor stocks.
The bottom line? TPUs are winning on performance-per-watt and cost in specific workloads, but Nvidia's ecosystem, broad availability, and workload versatility keep it firmly in the lead for now. Whether that changes depends on how quickly Google can expand TPU availability and whether Nvidia's next-generation platforms can widen the performance gap again. The AI chip war is heating up, and investors are betting billions on who comes out on top.