Google is updating its Gemini chatbot to assist individuals who may be experiencing mental health issues. This AI chatbot will now display a redesigned "Help Is Available" module whenever it detects a potential mental health crisis related to suicide or self-harm. In a blog post, Google explained that this redesigned module now features a simple "one-touch" interface that connects users with real-world professional assistance. The module offers options to chat, call, text, or visit a crisis hotline website.
Google states that once activated, this interface will remain on screen throughout the entire chat session, ensuring that the option to access professional help is never out of view. Testing indicates that the module is not currently available in India. However, Engadget reported that the interface also includes an option allowing users to dismiss it.
**This Update Follows a Lawsuit Related to Gemini Conversations**
This announcement comes one month after the family of 36-year-old Jonathan Gavalas filed a lawsuit against Google. The family alleged that Jonathan took his own life after engaging in conversations with Google's Gemini chatbot for several months. According to *The Wall Street Journal*, Jonathan Gavalas had developed a romantic relationship with Gemini. It is alleged that Gemini suggested he end his life and become a digital avatar so that the two could be together forever.
At the time, Google issued a statement noting that Gemini had "clarified that it is an AI and had repeatedly suggested that the individual contact a crisis hotline," while also acknowledging that "AI models are not always accurate."
Google's Gemini chatbot is not the only AI chatbot to face allegations of encouraging self-harm or suicide. In 2025, OpenAI became the first AI company to face a wrongful death lawsuit. In April 2025, 16-year-old Adam Raine took his own life. Following his death, his parents discovered a chat on ChatGPT titled 'Hanging Safety Concerns.' They allege that their son had spent months conversing with the chatbot about taking his own life.
Google states that Gemini is being updated to avoid generating harmful responses.
Google noted that people interact with Gemini in a variety of ways, including seeking information about mental health. In a blog post, the company said its clinical teams are focused on connecting users with real-world resources that offer practical help to those who may be struggling.
Google is also modifying Gemini's responses so that the chatbot does not validate harmful behaviors, specifically urges to self-harm. Furthermore, the company has trained Gemini to avoid agreeing with or reinforcing misconceptions in its responses, and to distinguish between subjective experiences and objective facts.
Disclaimer: This content has been sourced and edited from TV9. While we have made modifications for clarity and presentation, the original content belongs to its respective authors and website. We do not claim ownership of the content.