OpenAI Poised to Strike Back With More Powerful Model, Says 'Nvidia Way' Author

MarketDash Editorial Team
4 days ago
Despite entering a reported 'Code Red' mode, OpenAI is positioned to reclaim AI leadership in coming months by leveraging Nvidia's advanced NVL72 hardware clusters, according to tech analyst Tae Kim.

Reports of OpenAI's internal "Code Red" might sound alarming, but Tae Kim—author of 'The Nvidia Way'—thinks the ChatGPT maker is about to flip the script. His prediction? OpenAI will reclaim its position at the front of the AI pack with a dramatically more powerful model arriving within months.

The Hardware Edge That Changes Everything

Speaking on CNBC International, Kim made the case that AI's fundamental scaling laws are alive and well, despite recent hand-wringing about whether progress is slowing down. The key difference this time around isn't just more compute—it's better compute.

OpenAI is positioned to train its next model on Nvidia Corp. (NVDA) NVL72 server clusters, hardware that Alphabet Inc.'s (GOOG) (GOOGL) Google and Anthropic didn't have when they trained Gemini 3 and Claude. According to Kim, Microsoft Corp. (MSFT) data centers are now filling up with this new architecture, giving OpenAI a training advantage its competitors lacked during their most recent development cycles.

"OpenAI is going to train on those new, more powerful clusters," Kim explained. "I fully expect in the coming months, OpenAI will come out with a much better performing model, just following the scaling laws that Anthropic and Google have done."

In other words, the recent performance gains from competitors aren't evidence that OpenAI is falling behind—they're proof that the underlying technology is still improving rapidly. And OpenAI is about to ride that same wave with superior hardware.

Crisis Mode or Strategic Reset?

Kim's optimistic forecast comes as internal leaks suggest OpenAI has entered what employees are calling a "Code Red" state. The company has reportedly frozen side projects including search advertising and shopping agents to concentrate resources entirely on model quality.

The pressure is real. Google's Gemini 3 has reportedly been eating into ChatGPT's market share and user retention, exposing what some are calling a "reasoning deficit" in OpenAI's current offerings. CEO Sam Altman has apparently paused monetization efforts to address the competitive threat head-on.

But if Kim is right, this isn't panic—it's focus. OpenAI is consolidating around what matters most: building a model that can leapfrog the competition using hardware advantages that weren't available just months ago.

Why This Isn't a Bubble

Beyond the immediate battle for AI supremacy, Kim pushed back hard against comparisons to the dot-com crash. The valuation picture today looks nothing like 2000, he argued.

Back then, market darlings traded at 100 times forward earnings with little revenue to justify the hype. Today's AI leaders? They're trading at 25 to 30 times earnings—high, sure, but grounded in actual business fundamentals.

"We are just at the beginning of a multi-year, multi-quarter acceleration of AI demand," Kim said, pointing to what he called "overwhelming demand" that's forced hyperscalers like Microsoft and Amazon to double their data center capacity targets for the year ahead.

The infrastructure buildout alone tells the story. Companies aren't just betting on AI—they're pouring concrete and racking servers at a pace that suggests they see sustained, long-term demand. That's not bubble behavior. That's belief backed by capital expenditure.

So while OpenAI might be in crisis mode operationally, the broader industry fundamentals look solid. And if Kim's read is correct, the company that sparked the current AI boom is about to remind everyone why it got there first.
