Nvidia Corp. (NVDA) has a $500 billion problem, if you can call it that. The problem isn't whether demand for AI chips will hit that eye-watering number. It's whether the company should bother updating investors every time that number gets bigger.
On Tuesday, CEO Jensen Huang told CNBC tech reporter Kif Leswing that he doesn't plan to revise the $500 billion AI demand visibility figure every quarter, according to a post from CNBC correspondent Kristina Partsinevelos. The figure reflects what Nvidia can already see across its order books for 2025 and 2026, spanning Blackwell GPUs, the next-generation Vera Rubin platform, and the systems and networking hardware that go with them.
Here's the thing, though: Huang also said that "many new developments should increase our expectation," essentially winking at the upside without committing to a new forecast. So the $500 billion is real; it just isn't the ceiling.
The Number Has Already Grown, CFO Confirms
If you're wondering whether that upside is actually materializing or is just CEO optimism, Nvidia CFO Colette Kress offered some clarity. According to another post from Partsinevelos, Kress said the $500 billion in visibility has expanded since the company's GTC conference in October.
"So yes, that $500 billion has definitely gotten larger," Kress said. She pointed to early orders for Vera Rubin as a key driver, with customers already planning full-year volumes for a chip platform that hasn't even launched yet.
Those customers include the usual suspects: major cloud service providers, AI model developers, and neocloud operators. What's notable is that demand for a next-generation platform is getting locked in well before launch, suggesting customers would rather secure supply early than risk getting stuck in a queue later.
Open-Source AI Is Fueling More Demand, Not Less
One of the more interesting wrinkles in Nvidia's demand story is the rise of open-source AI models. Huang highlighted models like DeepSeek, Qwen, and Meta Platforms, Inc.'s (META) Llama, which now account for roughly one in four tokens generated across the AI ecosystem.
You might think open models would reduce the need for expensive GPUs, since they're supposed to democratize AI and make it cheaper to run. But Huang's view is the opposite: the spread of open models has expanded overall usage, which means more compute demand, not less. More developers building with AI means more tokens being processed, which means more chips needed.
That trend is being reinforced by big strategic moves. In November 2025, Microsoft Corp. (MSFT) and Nvidia announced a strategic collaboration with Anthropic that included a combined $15 billion investment in the AI startup. That kind of partnership isn't just about backing a promising company. It's about ensuring long-term demand for the infrastructure that powers it.
Analysts Are Revising 2026 Expectations Higher
Huang first dropped the $500 billion figure at GTC in October, describing it as business already on the books for 2025 and 2026. Analysts have since concluded that the comments imply a significantly stronger revenue outlook for 2026 than previously modeled, driven largely by demand for the Rubin generation of chips.
The fact that Kress is already saying the number has grown, so soon after GTC, suggests the demand environment is accelerating faster than even Nvidia expected. Whether or not the company updates the figure on a regular schedule, the trajectory is clear.
Stock Movement
Nvidia shares slipped 0.47% during Tuesday's regular trading session before ticking up 0.49% in after-hours trading.