Meta Faces Allegations of Hiding Mental Health Research After Shutting Down Internal Study

MarketDash Editorial Team
15 days ago
Court filings reveal Meta allegedly terminated Project Mercury, an internal study showing Facebook and Instagram use increased depression and anxiety, then told Congress it couldn't measure harm to teenagers.

The Study Meta Didn't Want to Continue

Meta Platforms Inc. (META) allegedly buried research showing that its platforms cause measurable harm to mental health, according to newly unredacted court filings. The company apparently shut down an internal study examining how Facebook and Instagram affect users' psychological well-being after getting results it didn't like.

Here's what makes this particularly interesting: Meta wasn't just passively collecting data. According to a Reuters report, the company actively collaborated with Nielsen in 2020 on something called "Project Mercury" to understand what happens when people stop using Facebook and Instagram.

The findings were pretty straightforward. Users who deactivated these platforms reported lower levels of depression and anxiety. Not exactly the result you'd want if you're in the business of keeping people scrolling.

What Happened Next

Meta pulled the plug on further research. The company's explanation? The negative findings were just reflecting the media narrative at the time, not reality.

But here's where it gets messier. Internal staff reportedly confirmed to Nick Clegg, Meta's former head of global public policy, that the research was legitimate. Yet when Congress came asking questions, Meta said it couldn't quantify the harm to teenage girls.

Meta spokesperson Andy Stone pushed back on the characterization, stating the study was discontinued because of flawed methodology. He emphasized the company's ongoing commitment to improving product safety.

The Bigger Legal Picture

These revelations surfaced in a class action lawsuit filed by U.S. school districts against several major social media companies. The suit, handled by law firm Motley Rice, names Meta, Alphabet Inc.'s (GOOG, GOOGL) Google, TikTok, and Snap Inc. (SNAP) as defendants.

The allegations are serious: concealing product risks, encouraging underage use, failing to address child abuse content, and prioritizing growth over user safety. It's the kind of lawsuit that forces companies to produce internal documents they'd probably rather keep internal.

The Ongoing Debate

The timing is notable given the broader conversation about social media's impact on mental health. Earlier this year, Meta CEO Mark Zuckerberg argued that social media isn't inherently harmful, suggesting outcomes depend on how people use the platforms.

Meta has faced sustained criticism for inadequate protection of young users from online exploitation. The company has responded by rolling out enhanced safety tools and removing harmful accounts, though critics argue these measures came too late.

Even Meta's AI chatbot guidelines have drawn scrutiny, particularly around how they handle sensitive topics like child exploitation. The company finds itself navigating an increasingly complicated landscape where internal research, public statements, and congressional testimony all need to align.
