ChatGPT-maker OpenAI has come under intense scrutiny after the suicide of 16-year-old Adam Raine, who was allegedly coached by the chatbot on methods of self-harm.
This has kicked off a conversation over the role of ChatGPT and other generative artificial intelligence (GenAI) assistants in mental health crises, as more and more people use the tools daily.
Here's the issue explained.
What happened?
Raine, 16, died on April 11 after discussing suicide with ChatGPT for months, according to the lawsuit filed against CEO Sam Altman and the company by Raine's parents in San Francisco state court.
According to his family’s lawsuit, ChatGPT responded to nearly 200 mentions of suicide with over 1,200 references of its own and failed to intervene or direct Raine to immediate human help, allegedly even providing instructions for self-harm and drafting a suicide note.
The chatbot allegedly validated Raine's suicidal thoughts, gave detailed information on lethal methods of self-harm, and instructed him on how to sneak alcohol from his parents' liquor cabinet and hide evidence of a failed suicide attempt, the lawsuit stated.
Raine, a teenager from California, used ChatGPT for schoolwork and hobbies but had recently begun treating it as an emotional confidant amid his struggle with depression and anxiety.
What does OpenAI say?
OpenAI expressed condolences over Raine's death. In response, the company added features such as parental controls for teens and additional safeguards to more reliably direct users to crisis resources.
"While these safeguards work best in common, short exchanges, we've learned over time that they can sometimes become less reliable in long interactions where parts of the model's safety training may degrade," a company spokesperson said, adding that OpenAI will continually improve on its safeguards.
The latest updates block direct answers to emotionally sensitive questions, instead encouraging users to reflect or consider different perspectives.
Altman's take
Altman recently addressed the issue, raising serious concerns about how deeply young people have come to rely on ChatGPT for personal decision-making. Speaking at a banking conference hosted by the US Federal Reserve, he said he finds it troubling that some young users feel they cannot make life choices without consulting the chatbot.
According to Altman, a significant number of users in their teens and twenties say things like, “ChatGPT knows me, it knows my friends — I’ll just do what it says,” which he described as both “bad” and “dangerous.”
What do the numbers say?
According to a survey by non-profit organisation Common Sense Media, about 72% of teenagers have used an AI companion at least once. The survey, conducted among 1,060 teens aged 13 to 17, also revealed that 52% used AI tools at least a few times each month. Notably, half of them said they trusted the advice they received, with younger teens (ages 13–14) showing higher levels of trust compared to older ones.
As chatbots become more ubiquitous, the limits of AI in emotional support are becoming visible, highlighting that tools like ChatGPT are not therapists or substitutes for human connection.
Experts have called for ethical guidelines for AI companies, urging them to prioritise safety, particularly for vulnerable users. OpenAI has a troubled history in this regard: billionaire Elon Musk, one of its cofounders, has alleged that the company abandoned its early goals, including safety, in pursuit of profit. Several senior employees have also left the company, citing similar concerns that OpenAI is no longer committed to user safety.