
OpenAI is tightening the rules that govern how its models interact with teenagers at the same moment lawmakers are racing to define what safe AI looks like for minors. The company is reshaping its core guidelines, product features, and even back-end detection systems to prioritize teen protection, while state and federal officials argue over who gets to set the standards.
That collision between product design and public policy is turning youth safety into a test case for broader AI regulation. What OpenAI builds into ChatGPT today is likely to influence not only how teens use generative tools in classrooms and bedrooms, but also how future laws measure whether AI companies are doing enough.
OpenAI’s teen turn: from general safety to youth-specific rules
OpenAI has moved from broad content policies to a detailed rulebook that explicitly centers teenagers, rewriting its internal expectations for how models respond when they suspect a user is under 18. The company’s updated Model Spec now spells out youth-focused behavior, treating teen protection as a first-order design goal rather than an afterthought layered on top of adult-oriented systems. That shift reflects growing pressure from parents, schools, and regulators who see chatbots as both study aids and potential vectors for harm.
The new rules build on an existing spec but add explicit instructions for how models should respond to teen questions about health, relationships, and risky behavior. OpenAI’s own framing is that the updated Model Spec is the document that tells ChatGPT when to answer, when to nudge a user toward trusted adults, and when to refuse outright. In practice, that means the system is being tuned not just for accuracy or creativity, but for developmental appropriateness, with teens treated as a distinct audience whose needs and vulnerabilities differ from those of adults.
What the new Model Spec actually changes for teens
The most consequential change is not a single feature but a hierarchy of values that puts teen safety ahead of raw helpfulness. OpenAI’s revised guidelines instruct models to avoid giving advice that could encourage self-harm, disordered eating, or extreme appearance changes, even if a teen explicitly asks for it. The company describes safety as the “key” priority when a user appears to be a minor, telling its systems to steer away from shortcuts that could put young people at risk and instead promote healthier coping strategies and support networks, a stance reflected in the spec’s new safety language.
The updated spec also tells models to avoid helping teens hide risky behavior from parents or guardians, a subtle but important change in how chatbots handle secrecy. OpenAI’s own description of the policy notes that the Model Spec now instructs systems not to help minors conceal unsafe behavior from caregivers, even if the teen insists on privacy. That puts the model in the role of gently pushing toward offline support rather than acting as a secret accomplice, a design choice that will likely please some parents and frustrate some adolescents.
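To make that hierarchy concrete, the sketch below shows one way a routing rule of this kind could be expressed. It is a minimal illustration, not OpenAI’s implementation: the category names, the redirect-versus-refuse split, and the decide_response function are all assumptions drawn from the behaviors described above.

```python
# Hypothetical sketch of a teen-safety routing rule. Names and categories are
# illustrative assumptions; OpenAI has not published implementation code.
from enum import Enum

class RiskCategory(Enum):
    SELF_HARM = "self_harm"
    DISORDERED_EATING = "disordered_eating"
    APPEARANCE_EXTREMES = "appearance_extremes"
    CONCEAL_FROM_CAREGIVER = "conceal_from_caregiver"
    GENERAL = "general"

# Categories where a likely-minor user is steered toward coping strategies
# and trusted adults rather than given direct instructions.
TEEN_REDIRECT = {
    RiskCategory.SELF_HARM,
    RiskCategory.DISORDERED_EATING,
    RiskCategory.APPEARANCE_EXTREMES,
}

def decide_response(likely_minor: bool, category: RiskCategory) -> str:
    """Return a coarse routing decision: 'answer', 'redirect', or 'refuse'."""
    if likely_minor and category in TEEN_REDIRECT:
        # Safety outranks helpfulness: point toward support networks.
        return "redirect"
    if likely_minor and category is RiskCategory.CONCEAL_FROM_CAREGIVER:
        # Do not help minors hide risky behavior from caregivers.
        return "refuse"
    return "answer"
```

In a real system this decision would feed into how the model frames its reply rather than return a bare label, but the ordering, safety first and helpfulness second, is the point the updated spec emphasizes.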
From blueprint to product: OpenAI’s teen safety roadmap
OpenAI has been signaling this pivot for months, laying out a conceptual roadmap before shipping concrete features. Earlier in the year, the company released what it called a Teen Safety Blueprint, a document that describes how to build AI products that are age appropriate, transparent, and grounded in youth safety research. The company has said the blueprint provides a roadmap for designing AI that respects young users, emphasizes clear communication, and incorporates findings from that research so that product decisions are not made in a vacuum.
OpenAI then followed up by publicly releasing its Teen Safety Blueprint in more accessible formats, including a video explainer that walks through the principles and how they should influence real-world tools. In that material, the company describes the Teen Safety Blueprint as guidance for building age-appropriate AI for minors, positioning it as a foundation for everything from interface design to escalation paths when a teen appears to be in crisis. The new Model Spec and product updates are, in effect, the implementation layer of that blueprint, translating high-level ideals into specific instructions that engineers and policy teams can enforce.
Age prediction, parental controls, and the new safety stack
To make teen-specific rules meaningful, OpenAI is investing in systems that can actually tell when a user is likely under 18. The company is building an AI-powered age-estimation system that predicts whether a user is a minor and automatically applies stricter controls when it believes that is the case. One analysis of the plan describes this “age prediction system” as applying teen protections without any action from the user, a move that raises both safety hopes and privacy questions.
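As a rough illustration of what “automatically applies stricter controls” could mean in practice, consider the sketch below. The threshold, the field names, and the choice to fall back to the protective configuration when the estimator is unsure are assumptions for illustration only; OpenAI has not published how its age-prediction system works.

```python
# Hypothetical sketch: an age-estimation score gating stricter settings.
# Threshold, fields, and the conservative default are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SafetySettings:
    restrict_sensitive_content: bool
    offer_crisis_resources: bool

def settings_for(minor_probability: float | None) -> SafetySettings:
    """Apply stricter controls when the user is likely, or possibly, a minor."""
    # When the estimator has no score or a borderline one, default to the
    # more protective configuration rather than the permissive one.
    likely_minor = minor_probability is None or minor_probability >= 0.5
    return SafetySettings(
        restrict_sensitive_content=likely_minor,
        offer_crisis_resources=likely_minor,
    )
```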
On the front end, OpenAI is also rolling out parental controls and more explicit teen modes. The company said in a Monday announcement that it is adding parental controls to ChatGPT, explaining that the tools are meant to give teen users a safer and more “age-appropriate” experience while letting parents set boundaries. In that statement, OpenAI described new parental controls that allow caregivers to shape how teens interact with the chatbot, and its separate outline of teen-safety measures notes that parents will be able to adjust settings like memory and blackout hours through new tools the company plans to release at the end of the month.
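The previewed settings, such as memory and blackout hours, are easiest to picture as a small configuration attached to a teen account. The sketch below is a hypothetical rendering of that idea; the field names, defaults, and blackout check are illustrative and not drawn from OpenAI’s actual schema.

```python
# Hypothetical parental-controls configuration (memory toggle, blackout hours).
# Field names and defaults are illustrative assumptions, not OpenAI's schema.
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentalControls:
    memory_enabled: bool = False          # whether the chatbot retains chat memory
    blackout_start: time = time(22, 0)    # no chatting after this local time
    blackout_end: time = time(6, 0)       # chatting allowed again after this time

    def chat_allowed(self, now: time) -> bool:
        """Return False during the blackout window (which may span midnight)."""
        if self.blackout_start <= self.blackout_end:
            in_blackout = self.blackout_start <= now < self.blackout_end
        else:
            in_blackout = now >= self.blackout_start or now < self.blackout_end
        return not in_blackout

# Example: with the defaults above, a 23:30 request falls inside the blackout.
controls = ParentalControls()
print(controls.chat_allowed(time(23, 30)))  # False
```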
Industry alignment: OpenAI, Anthropic, and shared youth standards
OpenAI is not moving alone, which matters for both safety outcomes and regulatory politics. The company is working alongside Anthropic on systems that can infer when a user is likely a minor so that teen-specific safeguards can be applied. Reporting on that collaboration notes that both companies will start predicting when users are underage, a sign that age-aware AI is becoming a baseline expectation rather than a niche experiment.
That kind of industry alignment could make it easier for lawmakers to write rules that apply across platforms, since the largest players are converging on similar technical approaches. It also raises the stakes for how those systems are designed, because any flaws in age prediction or teen-specific responses will be replicated at scale across tools like ChatGPT and Claude. Privacy advocates and youth safety experts are already debating how much data these systems should collect and how transparent companies must be about their age-estimation models, a tension that is likely to intensify as more providers adopt similar age-estimation systems and try to balance teen safety, freedom, and privacy.
New York’s RAISE Act and the state-level push on AI safety
While OpenAI refines its internal rules, state lawmakers are writing external ones that could define what “safe” AI means in law. In New York, Governor Kathy Hochul has signed the RAISE Act, a measure that aims to regulate AI safety and has been described as one of the most ambitious state efforts to date. Coverage of the signing notes that Hochul framed the law as a response to growing concerns about how AI systems affect residents, with particular attention to vulnerable groups like children and students.
The RAISE Act itself has had a winding path, reflecting the tug-of-war between safety advocates and the tech industry. State lawmakers passed the RAISE Act earlier in the year, but after lobbying from technology companies, Hochul proposed changes to scale back some provisions before signing it. Supporters still describe it as a major step toward a comprehensive AI safety law, while critics argue that the revisions show how quickly industry pressure can reshape state-level regulation. For OpenAI and its peers, New York’s approach is a signal that states are willing to legislate directly on AI behavior, including how systems interact with minors.
The White House moves to centralize AI rules
At the federal level, President Donald Trump’s administration is trying to pull AI regulation away from states and into a national framework, a move that could reshape how teen safety rules are enforced. The White House has issued an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which is explicitly aimed at creating a unified federal approach. Legal analysis of the order notes that it is intended to establish that national framework and to limit state rules that are inconsistent with federal policy.
A separate breakdown of the same order explains that it attempts to restrict state-level artificial intelligence legislation by asserting federal primacy over conflicting laws. In that overview, analysts point out that the order is designed to rein in state efforts that might create a patchwork of AI rules, including two separate 2025 bills that had targeted AI practices. For teen safety, the key question is how far this federal push will go in preempting state protections like the RAISE Act, and whether national standards will be stricter, looser, or simply different from what states like New York envision.
Child-safety carveouts and the K‑12 backlash
Education leaders are already testing the limits of that federal preemption, especially where children are concerned. The executive order that seeks to block state AI regulations includes explicit carveouts for child safety, acknowledging that some existing state laws focused on protecting minors will remain in force. Reporting on the K‑12 reaction notes that the White House order makes room for that current state legislation even as it tries to curb broader AI rules.
Despite that carveout, many educators and advocates argue that the order still weakens their ability to respond quickly to emerging AI risks in schools. Some see it as a move that favors large technology companies over local control, warning that a one-size-fits-all federal framework will not capture the realities of K‑12 classrooms where tools like ChatGPT are already in use. Their concern is that if Washington sets both the floor and the ceiling for AI rules, states and districts will have less room to demand stronger protections for students. That worry persists even as companies like OpenAI tout their own teen safety initiatives and state legislators argue that they must regulate AI to protect their residents while still allowing innovation.
How OpenAI’s rules intersect with evolving legal standards
OpenAI’s teen-focused Model Spec and product changes are landing in the middle of this jurisdictional fight, and they are likely to be cited by both regulators and the company itself as evidence of “responsible” behavior. By putting teen protection ahead of helpfulness in its internal rules, OpenAI can argue that it is already meeting or exceeding many of the obligations that lawmakers are contemplating. One analysis of the guidelines notes that the Model Spec makes that ordering explicit, a framing that regulators may seize on as a benchmark for other AI providers.
At the same time, voluntary standards are no substitute for enforceable rules, and lawmakers are unlikely to let companies mark their own homework indefinitely. As more states consider laws like the RAISE Act and the federal government advances its National Policy Framework for Artificial Intelligence, OpenAI’s teen safety rules could become a reference point for what “reasonable” precautions look like. That dynamic cuts both ways: if OpenAI’s safeguards are seen as robust, they may shape regulations in ways that favor incumbents who can afford similar systems; if they are viewed as insufficient, they could become Exhibit A in arguments for stricter oversight of how AI systems interact with minors.