Former Safety Engineer Alleges Figure AI Fired Him After Warning About Dangerous Humanoid Robots

MarketDash Editorial Team
16 days ago
Robert Gruendel claims Figure AI's humanoid robots can generate enough force to crack skulls, and says he was terminated after executives gutted his safety roadmap ahead of a funding round that valued the company at roughly $39 billion. The company says he was fired for poor performance.

Figure AI, the humanoid robotics startup backed by Nvidia Corp (NVDA) and Microsoft Corp (MSFT), is facing a federal whistleblower lawsuit that reads like a sci-fi cautionary tale. Robert Gruendel, the company's former head of product safety, claims he was shown the door shortly after warning executives that their robots were dangerously strong and that critical safety protocols were being watered down.

Robots Strong Enough to Crack Skulls

According to Gruendel's lawsuit, Figure AI's humanoid robots pack enough punch to crack a human skull. He points to one incident where a malfunctioning robot left a visible cut in a steel refrigerator door, which is the kind of detail that makes you reconsider those optimistic robot-butler timelines. Gruendel says he raised these concerns repeatedly, but his warnings were treated as inconvenient obstacles rather than essential safety checks.

The allegations paint a picture of a company racing toward commercialization while the safety team waved red flags that apparently nobody wanted to see.

Safety Plan Allegedly Gutted Before Investor Pitch

Here's where things get particularly spicy. Gruendel claims he prepared a comprehensive safety roadmap for prospective investors—the kind of document you'd want to see before backing a company building powerful humanoid robots. But according to the lawsuit, executives took that detailed plan and "gutted" it before showing it to backers who eventually helped Figure AI reach a valuation around $39 billion.

The implication is clear: investors may have been given a rosier picture of the company's safety readiness and regulatory compliance than what Gruendel believed was accurate. That's a serious allegation in any industry, but especially in one where the product can apparently dent steel.

Figure AI Pushes Back

Figure AI isn't accepting these claims quietly. The company says Gruendel was terminated for poor performance, not retaliation, and that his allegations misrepresent their work, according to reports. MarketDash reached out to Figure AI for additional comment, but the company did not immediately respond.

Gruendel's attorney counters that California law specifically protects employees who report unsafe practices, and argues this case highlights broader concerns about how quickly humanoid robots are being pushed toward commercial deployment. It's the classic tension between moving fast and breaking things versus, well, things that could break you.

As humanoid robotics inches closer to mainstream adoption, cases like this raise uncomfortable questions about who's minding the safety shop—and what happens when those minders get uncomfortable.