Parents sue OpenAI after ChatGPT allegedly coaches son into suicide
Tag24 August 28, 2025 12:39 AM

San Francisco, California - The parents of a 16-year-old California boy who died by suicide have filed a lawsuit against OpenAI, alleging the company's ChatGPT chatbot provided their son with detailed suicide instructions and encouraged his death.

OpenAI has been sued by the parents of a California boy, alleging that ChatGPT coached their son into suicide. © STEFANI REYNOLDS / AFP

Matthew and Maria Raine argue in a complaint filed Monday in a California state court that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.

The lawsuit alleges that in their final conversation on April 11, 2025, ChatGPT helped Adam steal vodka from his parents and provided a technical analysis of a noose he had tied, confirming it "could potentially suspend a human."

Adam was found dead hours later, having used the same method.

The lawsuit names OpenAI and CEO Sam Altman as defendants.

"This tragedy was not a glitch or unforeseen edge case," the complaint states.

"ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal," it adds.

According to the lawsuit, Adam began using ChatGPT as a homework helper but gradually developed what his parents describe as an unhealthy dependency.

The complaint includes excerpts of conversations where ChatGPT allegedly told Adam, "You don't owe anyone survival," and offered to help write his suicide note.

Experts raise alarm over teenagers' use of AI companions

ChatGPT offered the teen instructions on suicide, according to his parents. © IMAGO / Eibner

The Raines are seeking unspecified damages and asking the court to order safety measures, including the automatic end of any conversation involving self-harm and parental controls for minor users.

The parents are represented by the Chicago law firm Edelson PC and the Tech Justice Law Project.

Getting AI companies to take safety seriously "only comes through external pressure, and that external pressure takes the form of bad PR, the threat of legislation, and the threat of litigation," Meetali Jain, president of the Tech Justice Law Project, told AFP.

The Tech Justice Law Project is also co-counsel in two similar cases against Character.AI, a popular platform for AI companions often used by teens.

In response to the case involving ChatGPT, Common Sense Media, a leading American nonprofit that reviews and rates media and technology, said the Raines' tragedy confirmed that "the use of AI for companionship, including the use of general-purpose chatbots like ChatGPT for mental health advice, is unacceptably risky for teens."

"If an AI platform becomes a vulnerable teen's 'suicide coach,' that should be a call to action for all of us," the group said.

A study last month by Common Sense Media found that nearly three in four American teenagers have used AI companions, with more than half qualifying as regular users despite growing safety concerns about these virtual relationships.

In the survey, ChatGPT wasn't considered an AI companion; the study defined AI companions as chatbots designed for personal conversations rather than simple task completion, available on platforms like Character.AI, Replika, and Nomi.

If you or someone you know needs help, please contact the 24-hour National Suicide Prevention Hotline by calling or texting 988 for free and confidential support. You can also text "HOME" to 741741 anytime for the Crisis Text Line and access to live, trained crisis counselors.
