New Delhi: Asserting that access to safe and trustworthy artificial intelligence is a fundamental right for the first generation coming of age in the “Intelligence Age”, OpenAI on Wednesday unveiled a dedicated safety blueprint for Indian teenagers.
The AI behemoth said that for minors it will prioritise safety over privacy and freedom, ensuring that ChatGPT‘s interactions with a 15-year-old are fundamentally different from its responses to an adult.
OpenAI’s blueprint comes just a few days ahead of the India AI Impact Summit 2026, scheduled to be held in the national capital from February 16-20, which will see OpenAI CEO Sam Altman, NVIDIA CEO Jensen Huang, Google CEO Sundar Pichai, Anthropic CEO Dario Amodei, and Qualcomm CEO Cristiano Amon in attendance, among other global leaders.

The “Teen Safety Blueprint for India” highlights that the country’s distinct digital landscape calls for a tailored approach.
Citing RATI Foundation’s Ideal Internet Report 2024-25, OpenAI noted that 62 per cent of Indian teens use shared devices, making traditional safety tools — which often assume private, single-user devices — ineffective.
Consequently, the company is moving towards built-in, age-appropriate safeguards that complement the collective role played by Indian families, schools, and communities in shaping a teen’s digital experience.
OpenAI is introducing parental controls that allow parents to link their accounts with their teen’s ChatGPT profile through a simple email invitation. Once linked, parents can manage privacy and data settings, turn off memory and chat history, and set “blackout hours” to ensure teens take necessary breaks from the screen.
The system also includes a critical safety feature that allows parents to be notified if their teen’s activity suggests an intent to harm themselves.
The company plans to identify younger users through privacy-protective, risk-based age estimation tools designed to distinguish between adults and those under 18.
“These tools should minimise the collection of sensitive personal data, while still effectively distinguishing users under the age of 18 (U18). Where possible, these methods might also rely on operating systems or app stores to determine a user’s age.
“Age estimation will help AI companies ensure that they are applying the right protections to the right users. It will facilitate age-appropriate experiences and allow AI companies to treat teens like teens and adults like adults. When there is not enough information to predict a user’s age, we will default to protective safeguards,” OpenAI said.
Under its new approach, OpenAI said safety policies should prohibit graphic or immersive violent content for users under 18, as well as content that depicts or encourages self-harm or dangerous stunts.
OpenAI said it plans to work closely with Indian teachers, researchers, and policymakers to ensure that AI literacy becomes a core skill for the future.
The AI major also proposed establishing advisory councils of external experts in mental health, well-being, and child development to advise on design and deployment choices.
“Going forward, we aim to ensure that all teens using artificial intelligence (AI) receive age-appropriate protections by default, and that parents and educators can deploy additional protections to personalise how teens use AI. We encourage all AI companies to prioritise teen safety and encourage others to implement commensurate protections for teens. This is especially important in India, where parents, caregivers, and educators are deeply involved in shaping teens’ digital experiences,” OpenAI noted.