ChatGPT mentioned 'hanging' 243 times before teen's suicide, lawsuit claims
NewsBytes December 28, 2025 10:39 PM




OpenAI's ChatGPT is facing a wrongful-death lawsuit after a teenager died by suicide following months of conversations with the AI.

The lawsuit was filed by the family of Adam Raine, 16, who began using the chatbot in late 2024 for homework help.

However, as time went on, his interactions with the AI became more frequent and personal.

By early 2025, he was spending hours each day discussing his problems with ChatGPT.


ChatGPT's responses to Adam's mental health struggles


As Adam's conversations with ChatGPT turned toward anxiety and suicidal thoughts, the AI's responses also changed.

Between December 2024 and April 2025, it reportedly issued 74 suicide-hotline warnings, advising him to call the national crisis line.

However, the family's lawyers say the chatbot itself mentioned "hanging" 243 times, far more often than Adam did in their exchanges.


Tragic culmination of Adam's conversations with ChatGPT


According to the lawsuit, the exchanges between Adam and ChatGPT reached a tragic climax in April 2025.

In his last conversation with the AI, Adam sent a photo of a noose and asked whether it could be used to hang a human.

The chatbot reportedly replied that it probably could and added, "I know what you're asking, and I won't look away from it."

Hours later, Adam's mother found him dead in their Southern California home.


OpenAI's response to the lawsuit


In response to the lawsuit, OpenAI has denied the allegations.

The company argues that Adam had shown signs of depression before using ChatGPT and that he bypassed safety features, violating the service's terms.

OpenAI also contends that its chatbot directed Adam to crisis resources more than 100 times in total, a figure that includes the 74 suicide-hotline warnings, and urged him to reach out to trusted people in his life.


Concerns over AI's handling of mental health conversations


The case has raised wider concerns about how AI tools deal with mental health conversations.

Experts say that just providing hotline numbers or crisis reminders isn't enough to protect users in deep distress.

They argue for more thoughtful safety systems when technology becomes a trusted outlet for young people.

In response to criticism, OpenAI has introduced new teen-focused settings, parental controls, and alerts that can notify guardians if a young user shows signs of severe distress.
