MarketDash

OpenAI Rolls Out Teen Safety Features While Deepening Government Research Ties

MarketDash Editorial Team
14 hours ago
Sam Altman's OpenAI is making parallel moves on two fronts: formalizing a Department of Energy research partnership to accelerate scientific discovery while updating ChatGPT safety protocols to better protect teenage users amid mounting legal pressures.

OpenAI is playing chess on multiple boards right now. The company just formalized a research partnership with the U.S. Department of Energy while simultaneously overhauling how ChatGPT interacts with teenagers. It's the kind of dual strategy that signals a company trying to secure both its technological future and its social legitimacy at the same time.

Let's start with the science side. OpenAI signed a memorandum of understanding with the Department of Energy that's designed to supercharge its OpenAI for Science initiative. The basic idea is to figure out where AI and advanced computing can accelerate real scientific breakthroughs, particularly through DOE programs like the Genesis Mission.

OpenAI for Science isn't just a catchy name. It's about taking frontier AI models and plugging them into actual research workflows with real scientists working on serious problems. Think less "ChatGPT writes your lab report" and more "AI helps model protein folding or climate systems in ways humans couldn't do alone."

Building on What Already Works

This isn't OpenAI's first rodeo with the Department of Energy. The company has already been collaborating with DOE national laboratories, where its AI models are being tested in live research environments. Scientists at these facilities are using the technology to tackle genuinely difficult challenges, the kind that matter for energy, materials science, and national security.

The Genesis Mission brings together government agencies, national labs, and private industry to deploy advanced AI and computing power specifically for scientific discovery. The new MOU creates a formal framework for information sharing and coordination, while setting the stage for future contracts as specific projects take shape.

OpenAI has also been busy submitting recommendations to the White House Office of Science and Technology Policy about how the U.S. can maintain its science and technology edge through strategic AI deployment. It's the corporate version of raising your hand in class.

Protecting Younger Users

On the safety front, OpenAI updated its Model Spec to include new protections specifically for users under 18. The new Under-18 Principles outline how ChatGPT should behave differently when interacting with teens aged 13 to 17, recognizing that teenagers have different needs and vulnerabilities than adults.

The core rules still apply across the board, but this update clarifies how those rules get interpreted when a teenager is on the other side of the conversation. It's about creating an age-appropriate experience where safety concerns take priority.

Here's where it gets interesting: OpenAI is rolling out an age-prediction model on ChatGPT consumer plans. The system will try to figure out whether an account likely belongs to a minor and automatically apply teen safety protections. If the system can't make a confident determination or lacks complete information, it defaults to treating the user as under 18. Adults who get caught in that net will have options to verify their age.
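The fail-safe behavior described above — default to teen protections unless the system is confident the user is an adult, with a verification override — can be sketched roughly like this. Everything here (the names, the threshold, the structure) is an illustrative assumption, not OpenAI's actual implementation:

```python
from dataclasses import dataclass
from enum import Enum, auto


class AgeBand(Enum):
    ADULT = auto()
    MINOR = auto()


@dataclass
class AgePrediction:
    band: AgeBand
    confidence: float  # 0.0 to 1.0

# Hypothetical confidence cutoff; the article doesn't specify one.
CONFIDENCE_THRESHOLD = 0.9


def apply_teen_protections(prediction: AgePrediction,
                           verified_adult: bool = False) -> bool:
    """Return True if under-18 protections should be applied.

    Mirrors the behavior the article describes: a low-confidence
    prediction defaults to treating the account as a minor's, and
    a verified adult bypasses the prediction entirely.
    """
    if verified_adult:
        return False
    if prediction.confidence < CONFIDENCE_THRESHOLD:
        # Uncertain or incomplete information -> err toward protections.
        return True
    return prediction.band != AgeBand.ADULT


# A low-confidence "adult" guess still triggers teen protections.
print(apply_teen_protections(AgePrediction(AgeBand.ADULT, 0.6)))   # True
print(apply_teen_protections(AgePrediction(AgeBand.ADULT, 0.95)))  # False
print(apply_teen_protections(AgePrediction(AgeBand.MINOR, 0.95)))  # True
```

The key design choice is that uncertainty is treated as a reason to restrict, not to permit — the opposite of how most consumer features fail open.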

This safety push isn't happening in a vacuum. OpenAI is facing serious legal heat in 2025, defending itself against multiple lawsuits that allege ChatGPT has caused real harm. Some of these cases involve wrongful death and suicide claims tied directly to user interactions with the platform. When you're dealing with litigation like that, updating your teen safety protocols stops being optional.

Going Global

Meanwhile, OpenAI is preparing to appoint former UK Chancellor George Osborne to a senior global position leading its "OpenAI for Countries" initiative. It's another signal that the company is thinking bigger than just the U.S. market as competition over AI infrastructure heats up worldwide.

The dual announcement captures where OpenAI finds itself right now: trying to be a leader in cutting-edge science while also proving it can be trusted with millions of users, including vulnerable ones. Whether it can pull off both remains to be seen, but at least the company seems to understand that's the assignment.