If you thought Nvidia Corp. (NVDA) was done making headlines after Blackwell, think again. At CES 2026, CEO Jensen Huang took the stage in his signature leather jacket (shinier than usual, apparently) to announce that the company's next-generation AI supercomputing platform, Vera Rubin, is already in full production. That's right: while Blackwell is still ramping up, Nvidia is shipping its successor.
The Computing Industry Is Going Through Two Shifts at Once
Huang opened his keynote at a packed Fontainebleau Las Vegas venue by explaining that the computing industry typically undergoes a major transformation every 10 to 15 years. This time, though, something unusual is happening: two fundamental shifts are occurring simultaneously. First, applications are increasingly being built directly on AI rather than traditional software frameworks. Second, the actual process of creating software is being completely redefined.
After greeting the crowd with New Year's wishes, Huang dove into Nvidia's progress in scaling AI beyond simple chatbots. The company is pushing toward agentic systems that can plan and reason, teaching AI the laws of physics, and expanding into entirely new domains.
Beyond Chatbots: AI That Plans, Reasons, and Acts
Huang traced AI's journey from early neural networks through transformers to today's large language models. But he made it clear that the next phase goes well beyond text generation. The focus now is on agentic AI—systems capable of autonomous planning, reasoning, and action over extended periods.
"Large language models isn't the only form of information," Huang explained. "Wherever the universe has information, wherever the universe has structure, we could teach a large language model."
That philosophy extends to what Nvidia calls physical AI: systems trained to understand and interact with the real world using actual physics. These aren't chatbots that know about physics in an abstract sense—they're AI systems that comprehend how objects move, collide, and behave in three-dimensional space.
Open-Source Models Are Closing the Gap Fast
One of the keynote's major themes was Nvidia's commitment to open AI ecosystems. According to Huang, open-source models are now only about six months behind proprietary frontier models, and that gap continues to shrink. Around 80% of startups are building on open models, and a significant portion of AI usage across developer platforms now relies on open-source systems.
Nvidia isn't just releasing models, either. The company is open-sourcing the data, training tools, evaluation frameworks, and deployment systems that make those models work. It's a bet that the open ecosystem will drive faster innovation and broader adoption than closed, proprietary alternatives.
Nvidia's own open models are already competing at the top of leaderboards in areas like optical character recognition, PDF comprehension, and natural language search. The performance gap between open and closed systems is narrowing faster than many expected.
Physical AI for Robots and Self-Driving Cars
Nvidia showcased its Cosmos world foundation model, which generates realistic simulations and synthetic data to train robots and autonomous systems. Huang revealed that Nvidia uses Cosmos internally for its own self-driving vehicle development.
The company also unveiled Alpamayo, an open-source reasoning and decision-making AI designed specifically for autonomous driving. The system allows vehicles to learn from limited real-world data and handle unfamiliar scenarios—critical capabilities for self-driving technology that needs to work safely in unpredictable environments.
Robots on Stage and Manufacturing Partnerships
In a memorable moment during the keynote, fully autonomous droids from Star Wars—BDX models powered by Nvidia Cosmos—rolled onto the stage. Huang clearly enjoyed the back-and-forth interaction with the robots, demonstrating the kind of autonomous, responsive behavior that physical AI enables.
Beyond the stage theatrics, Nvidia announced a partnership with Siemens that signals a significant push into manufacturing. The collaboration will use Nvidia's physical AI trained on synthetic data from digital factory twins to develop next-generation robotics for industrial applications. It's one thing to have a robot navigate a warehouse; it's another to have it operate safely and efficiently in a complex manufacturing environment with constantly changing conditions.
Vera Rubin: Five Times Faster, Five Minutes to Assemble
Now for the main event. Huang confirmed that Vera Rubin, Nvidia's successor to Blackwell, is in full production. The system delivers up to five times the performance of Blackwell while improving efficiency, memory bandwidth, and interconnect speeds.
Vera Rubin represents what Nvidia calls an "extreme-codesigned, six-chip AI platform"—the first of its kind for the company. It integrates advanced GPUs, custom CPUs, high-speed networking via ConnectX-9 Spectrum-X SuperNIC, and full-stack encryption. The platform is designed to address what Huang identified as AI's next major bottleneck: context and data movement.
Here's where things get interesting from an engineering perspective. Vera Rubin can be assembled in just five minutes, compared to roughly two hours for previous systems. The entire platform is water-cooled, but not with the cold water you might expect. Instead, it uses hot water at around 45°C—a counterintuitive approach that Huang said significantly reduces energy costs.
An NVLink 6 Switch enables all GPUs within the Vera Rubin system to communicate with one another simultaneously, eliminating the bottlenecks that can slow down distributed AI training and inference.
Named After a Pioneering Astronomer
At the start of the Vera Rubin presentation, Huang paid tribute to astronomer Vera Rubin, whose observation that stars at the outer edges of galaxies orbit nearly as fast as those near their centers provided key evidence for dark matter, one of the most important breakthroughs in modern physics. Huang said Nvidia chose to name its next computer platform in her honor, continuing the company's tradition of naming architectures after scientists.
What This Means for Nvidia
The rapid succession from Blackwell to Vera Rubin signals that Nvidia isn't slowing down its product cadence despite its dominant position in AI computing. The company is betting that AI workloads will continue to demand exponentially more computing power, and that the competitive moat comes from staying multiple generations ahead.
The emphasis on open-source models and physical AI also suggests Nvidia sees its future not just in selling chips for chatbots, but in powering the next wave of autonomous systems—robots, self-driving vehicles, and intelligent manufacturing. These applications require different capabilities than large language models, including real-time decision-making, physical simulation, and safety-critical reliability.
The Siemens partnership is particularly notable because it brings Nvidia's technology directly into industrial settings where reliability and integration with existing systems matter enormously. Manufacturing is a massive market, and if Nvidia can become the standard platform for AI-powered robotics in factories, that's a significant new revenue stream beyond hyperscale data centers.
Price Action: Nvidia (NVDA) shares closed down 0.39% during Monday's regular session and slipped another 0.069% in after-hours trading.




