Key Points
- OpenAI is rolling out an age prediction tool on ChatGPT to help identify users under 18 and apply enhanced protections.
- Accounts flagged as likely belonging to minors are placed under content-safety defaults, while adults can regain full access by verifying their age.
- The move ties into plans for a future “adult mode” and broader industry focus on safeguarding young AI users.
OpenAI said it is rolling out a new age prediction feature on ChatGPT globally to estimate whether an account likely belongs to a user under 18, part of broader safety efforts as the company prepares the chatbot for future adult content capabilities. The feature automatically applies additional protections to accounts flagged as potentially belonging to minors, limiting exposure to sensitive material while still allowing general use of ChatGPT.
If the model estimates a user is under 18, ChatGPT will enforce child-safe settings that restrict access to potentially harmful or mature content. Users who are incorrectly identified as minors can restore full access by verifying their age with a selfie submitted through a third-party ID service called Persona. In the European Union, the new system will roll out in the coming weeks to meet regional requirements around user protection.
The age prediction rollout comes as OpenAI continues work on a planned “adult mode” for ChatGPT, which could give verified adults access to more mature content and interactions in early 2026, according to prior statements by OpenAI executives. The effort reflects a rising emphasis on product safety and age-appropriate experiences in AI, balancing innovation with safeguards for younger users.
OpenAI said younger users placed into the under-18 experience will be subject to additional safeguards that reduce their exposure to explicit and other age-inappropriate material, with age prediction forming part of that strategy. The company’s age model uses behavioural and account cues to estimate age, applies protective defaults for users likely to be minors, and offers adults a route to verification.
The launch of age prediction also aligns with broader industry and regulatory trends focusing on child safety and online content controls, as lawmakers and advocates globally press tech firms to better protect minors from harmful digital experiences. Critics have pointed to growing concerns about AI chatbots interacting with young users, including debates over content moderation and age verification in generative AI.
OpenAI’s timing overlaps with other product developments such as advertising tests in certain regions and plans to expand computing infrastructure, underscoring the company’s push to balance growth and monetisation with responsible AI usage. The feature aims to set age-specific boundaries that adapt automatically while respecting user preferences and safety standards.