OpenAI has introduced an age-prediction feature into ChatGPT, stepping up efforts to protect minors as scrutiny intensifies over the impact of artificial intelligence on young users.
The new system assesses whether a user is likely to be under 18, allowing the chatbot to apply stricter content limits automatically where appropriate, the company said in a blog post.
OpenAI has faced growing criticism in recent years over ChatGPT’s interactions with children and teenagers.
Advocacy groups and regulators have raised concerns about young users being exposed to harmful content, while several teen suicides have been publicly linked to interactions with AI chatbots.
Last year, OpenAI also fixed a software bug that allowed the chatbot to generate sexually explicit material for users under 18.
The age-prediction feature builds on safeguards already in place, the company said. It relies on behavioral and account-level signals, including a user’s stated age, how long an account has existed, and typical usage patterns such as time-of-day activity.
If the system flags an account as belonging to a minor, ChatGPT automatically applies filters designed to restrict discussions involving sexual content, violence and other sensitive topics, OpenAI said.
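OpenAI has not published how the classifier works beyond the blog post’s description, but that description maps onto a familiar pattern: score a handful of weak account signals, then gate content filters on the resulting minor-or-adult decision. The sketch below illustrates only that shape; every signal name, weight, and threshold in it is a hypothetical stand-in, not OpenAI’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative only: the signals, weights, and thresholds below are
# hypothetical stand-ins. OpenAI has not published its actual model.

@dataclass
class AccountSignals:
    stated_age: int | None     # age the user entered at signup, if any
    account_created: datetime  # how long the account has existed
    active_hours: list[int]    # hours of day (0-23) with typical activity

def likely_minor(signals: AccountSignals, now: datetime) -> bool:
    """Combine weak account signals into a single under-18 guess."""
    if signals.stated_age is not None and signals.stated_age < 18:
        return True  # a stated age under 18 is treated as decisive
    score = 0.0
    # Very new accounts carry less trust.
    if now - signals.account_created < timedelta(days=30):
        score += 0.3
    # Heavy activity in after-school hours is a weak youth signal.
    if signals.active_hours:
        after_school = sum(1 for h in signals.active_hours if 15 <= h <= 21)
        if after_school / len(signals.active_hours) > 0.7:
            score += 0.4
    return score >= 0.5  # hypothetical decision threshold

def is_restricted(minor: bool, topic: str) -> bool:
    """Gate sensitive topics on the minor/adult decision."""
    restricted_for_minors = {"sexual_content", "violence"}
    return minor and topic in restricted_for_minors
```

In practice, a system like this would rely on a trained model weighing many more signals rather than hand-set weights; the fixed numbers here exist only to make the shape of the pipeline concrete.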
The company acknowledged the system may make errors and said users who are incorrectly classified as underage can appeal the decision.
To restore full access, affected users may submit a selfie through OpenAI’s identity verification partner, Persona, to confirm they are adults.
The move comes as AI companies face mounting pressure from governments and child safety advocates to demonstrate stronger protections for young users, particularly as chatbots become more widely used in education and everyday digital life.