Here's an uncomfortable question gaining traction in Silicon Valley: What if the thing we're racing to build is the thing we should be running away from?
Mustafa Suleyman, who runs artificial intelligence at Microsoft Corp. (MSFT), recently sat down with the "Silicon Valley Girl Podcast" to discuss exactly that concern. His take? Building autonomous superintelligence should be what he calls "the anti-goal" of the entire industry.
"It would be very hard to contain something like that or align it to our values. And so that should be the anti-goal," he said.
That's a striking stance from someone at the center of the AI arms race, especially given how much money and computing power companies are pouring into frontier AI systems.
The Case For Human-Centered AI
Suleyman explained to host Marina Mogilko that autonomous superintelligence—meaning a system that can improve itself, set its own goals, and operate independently of humans—simply "doesn't feel like a positive vision of the future."
The problem isn't just theoretical. A system capable of independent action and self-directed goals would be inherently difficult to control. Its very independence creates risks that Suleyman believes the industry should actively work to avoid.
Instead, Suleyman is pushing what he calls "humanist superintelligence." Before joining Microsoft, he co-founded DeepMind, the research lab later acquired by Google and now Alphabet Inc.'s (GOOGL) Google DeepMind unit, so he knows this terrain well. His vision involves building models designed to operate in service of human interests: tools that support rather than supplant human decision-making.
He also pushed back against the increasingly popular notion that AI systems might deserve moral consideration or possess consciousness. "These things don't suffer. They don't feel pain," he said, emphasizing that current models are "just simulating high-quality conversation," not experiencing genuine emotion or self-awareness.
Meanwhile, The Race Accelerates
Suleyman's caution stands in contrast to the aggressive timelines other tech leaders are publicly sharing. As training infrastructure improves and investment floods into the sector, predictions about advanced AI have gotten bolder.
OpenAI CEO Sam Altman wrote on his blog earlier this year that superintelligent tools could push scientific discovery far beyond current human capabilities, potentially unlocking unprecedented abundance and prosperity. In September, he told German newspaper Die Welt he would be "very surprised" if superintelligence doesn't arrive by 2030. That timeline has become a reference point in industry discussions about what's coming and how fast.
Google DeepMind co-founder Demis Hassabis offered his own forecast in April, telling Time magazine that AGI could emerge "in the next five to 10 years." He envisioned future systems "embedded in everyday lives" with the ability to understand the world "in nuanced ways."
The Skeptics Aren't Convinced
Not everyone buys into these timelines or the underlying assumptions driving them. Meta Platforms Inc. (META) chief AI scientist Yann LeCun has become one of the most prominent voices pushing back against the hype.
Speaking at the World Economic Forum in January, LeCun argued that current AI systems fundamentally cannot reason, plan, or understand the physical world. Achieving human-level AI, he said, will require "an entirely new paradigm"—not just bigger models trained on more data.
He doubled down on that view during an April talk at the National University of Singapore, noting that "most interesting problems scale extremely badly." In other words, simply throwing more computing power and training data at today's systems won't solve their underlying limitations.
So we've got one camp racing toward superintelligence with bold timelines, another camp warning that the goal itself is dangerous, and a third camp saying we're not even close with current approaches. The only thing everyone seems to agree on is that the stakes couldn't be higher.