OpenAI has started rolling out a new age prediction system across ChatGPT consumer plans to identify accounts that may belong to users under 18 and automatically apply appropriate safeguards. The company says the system is designed to strengthen protections for younger users while allowing adults to continue using ChatGPT with fewer restrictions under its existing safety policies.

How ChatGPT age prediction works
The age prediction system uses an internal model that estimates whether an account is likely operated by someone under 18. Instead of relying only on self-declared age, the model evaluates a combination of behavioral and account-level signals, including:
- How long the account has existed
- Typical times of day the account is active
- Usage patterns over time
- The user’s stated age
OpenAI says deploying the model at scale will help it better understand which signals improve accuracy and refine the system continuously.
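OpenAI has not published how these signals are weighted or combined. As a rough illustration only, a signal-based estimator of this kind could be sketched as a simple logistic score; the signal names, weights, and threshold below are invented for the example and are not OpenAI's actual model inputs:

```python
import math
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    """Hypothetical account-level signals (illustrative names only)."""
    account_age_days: int           # how long the account has existed
    daytime_activity_ratio: float   # fraction of activity in typical school hours
    stated_age: Optional[int]       # self-declared age, if provided

def under_18_score(s: AccountSignals) -> float:
    """Toy logistic combination of signals into an under-18 likelihood.
    All weights are made up for illustration."""
    x = 0.0
    x += 1.5 if (s.stated_age is not None and s.stated_age < 18) else -1.0
    x += 0.8 if s.account_age_days < 30 else -0.3
    x += 0.6 * s.daytime_activity_ratio
    return 1.0 / (1.0 + math.exp(-x))  # squash to a probability in (0, 1)
```

In a real system the weights would be learned from data rather than hand-set, which is consistent with OpenAI's note that large-scale deployment helps it learn which signals improve accuracy.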

Safeguards applied to under-18 accounts
When the system predicts that an account may belong to a minor, ChatGPT automatically enables additional protections. These safeguards limit access to content that could be harmful or inappropriate for younger users, including:
- Graphic or gory violence
- Viral challenges that encourage risky behavior
- Sexual, romantic, or violent role play
- Depictions of self-harm
- Content promoting unrealistic body ideals, harmful eating behaviors, or appearance-based shaming
If the system is uncertain about a user’s age or lacks enough information, ChatGPT defaults to a safer experience by design.
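That default-to-safe behavior can be captured in a short sketch. The threshold and function names below are hypothetical, not OpenAI's published logic; the point is only that the standard experience requires both sufficient signal and high confidence the user is an adult:

```python
from enum import Enum
from typing import Optional

class Experience(Enum):
    TEEN_SAFEGUARDS = "teen_safeguards"
    STANDARD = "standard"

# Illustrative cutoff; OpenAI has not published its actual thresholds.
ADULT_THRESHOLD = 0.25  # below this score, confidently adult

def choose_experience(under_18_probability: Optional[float],
                      has_enough_signal: bool) -> Experience:
    """Default-to-safe policy: apply teen safeguards unless the model has
    enough signal AND is confident the user is an adult."""
    if not has_enough_signal:
        return Experience.TEEN_SAFEGUARDS  # insufficient information: safer default
    if under_18_probability is None or under_18_probability >= ADULT_THRESHOLD:
        return Experience.TEEN_SAFEGUARDS  # uncertain or likely a minor: safer default
    return Experience.STANDARD
```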
Users who are incorrectly classified as under 18 can restore full access by verifying their age. The process requires a selfie-based identity check using Persona, which OpenAI describes as a secure verification service.
Users can check whether safeguards are active on their account and begin verification at any time by going to:
- Settings > Account
Alongside automatic safeguards, OpenAI is also offering parental controls for supervised teen accounts. Parents can:
- Set quiet hours when ChatGPT cannot be used
- Control features such as memory and model training
- Receive alerts when usage patterns indicate potential emotional risk
These tools are intended to give families more control over how teens interact with AI.
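Of these controls, quiet hours is the most mechanical: a time window during which the app is unavailable. A minimal sketch of such a check (purely illustrative, not how ChatGPT implements it) must handle windows that wrap past midnight:

```python
from datetime import time

def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside the quiet-hours window.
    Handles windows that wrap past midnight, e.g. 21:00 to 07:00.
    Illustrative sketch only."""
    if start <= end:
        # Same-day window, e.g. 13:00 to 15:00
        return start <= now < end
    # Wrapping window: active late evening OR early morning
    return now >= start or now < end
```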

OpenAI says the age prediction system is now rolling out across consumer plans and will be closely monitored during its early phase. In the European Union, the feature is expected to launch in the coming weeks to align with regional regulatory requirements.
The company added that it will continue working with external experts, including the American Psychological Association, ConnectSafely, and its Global Physician Network, as part of its broader teen safety efforts.
