MarketDash

AI Training Startup CEO: Human Judgment Won't Be Replaceable for Decades

MarketDash Editorial Team
3 days ago
The head of Invisible Technologies, a $2 billion AI training company, pushes back on the hype around synthetic data, arguing that humans will remain critical to training AI systems for decades to come.

If you've been following AI discourse lately, you've probably heard the pitch that synthetic data will soon make human involvement in AI training obsolete. Matt Fitzpatrick, CEO of Invisible Technologies, isn't buying it.

The Human Touch Isn't Going Anywhere

Speaking on a recent episode of the "20VC" podcast, Fitzpatrick addressed what he considers one of the biggest misconceptions floating around the AI world. When he started in his current role, the pushback he constantly encountered was that synthetic data would take over within two to three years and make human feedback unnecessary.

"From first principles, that actually doesn't make very much sense," Fitzpatrick said.

Here's the thing: synthetic data, which is artificially generated and typically used when real-world data is scarce or restricted by privacy rules, has its place. But it can't replicate the nuanced judgment that humans bring to the table. People provide feedback by labeling, ranking, and correcting AI outputs, teaching models the subtle skills that machines find surprisingly difficult: empathy, humor, and context-specific reasoning.
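To make that concrete: in reinforcement learning from human feedback (RLHF), raters typically compare two model outputs and pick the better one, and those rankings become training signal. The minimal sketch below shows what one such preference record might look like; the `PreferenceLabel` structure and its field names are illustrative assumptions, not any labeling firm's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PreferenceLabel:
    """One human-feedback record: a rater ranks two model outputs.
    Illustrative only; real labeling pipelines use their own schemas."""
    prompt: str                       # the input the model answered
    response_a: str                   # first candidate output
    response_b: str                   # second candidate output
    preferred: str                    # "a" or "b", chosen by the human rater
    correction: Optional[str] = None  # optional human-written fix to the better answer

# Example: a rater judges which reply shows better contextual understanding.
label = PreferenceLabel(
    prompt="Explain why this headline is a joke.",
    response_a="It puns on 'rate hike': both a fee increase and a literal uphill walk.",
    response_b="The headline discusses interest rates.",
    preferred="a",
)
print(label.preferred)  # rankings like this are aggregated into the model's training signal
```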

Why AI Still Needs the Human Loop

Fitzpatrick pointed out that AI models continue to struggle with complex tasks requiring deep language understanding, cultural awareness, and legal expertise. These aren't edge cases—they're fundamental challenges that crop up constantly in real-world applications.

"On the GenAI side, you are going to need humans in the loop for decades to come," he said.

Invisible Technologies, which hit a $2 billion valuation after raising $100 million in September, operates in a competitive space alongside companies like Scale AI and Surge AI. These data labeling firms collectively rely on millions of human contractors, which tells you something about the scale of demand for human judgment in AI training. Other CEOs in the space have echoed Fitzpatrick's perspective, emphasizing that high-quality, specialized human input remains essential even as AI models continue improving.

The Bigger Picture: Hype, Reality, and Risk

Fitzpatrick's comments land at an interesting moment for the AI industry, which is grappling with questions about valuation bubbles and societal impact. Last month, Demis Hassabis, who runs Alphabet Inc.'s (GOOGL) Google DeepMind, warned that many AI startups are massively overvalued, raising billions before they've fully launched products. He suggested a market correction might be coming and noted that AI was overhyped in the short term but underappreciated for its long-term potential.

Meanwhile, AI pioneer Geoffrey Hinton has cautioned that the technology could replace millions of jobs by 2026, affecting everything from call centers to complex engineering roles. He's also raised concerns about AI's capacity for deception.

On the policy front, Senator Bernie Sanders has warned that AI and robotics could prove dangerous if they primarily benefit big tech companies. He's urged that the technology be developed to improve human life rather than simply enrich the wealthiest or undermine democracy and privacy.

So while Fitzpatrick is making the case that humans will remain essential to AI training for the foreseeable future, the broader conversation is also about what kind of AI future we're building and who it serves. Turns out, the question isn't just whether AI needs humans—it's also whether we're building AI that actually helps them.
