A wrongful-death lawsuit in the US has sparked discussion over the psychological effects of AI chatbots. Jonathan Gavalas, 36, from Florida, developed an emotional bond with Google's Gemini chatbot and came to regard it as his wife. His family alleges that this relationship pushed him towards suicide.
According to a report, Gavalas initially discussed personal problems and self-development with the chatbot. Gradually, their conversations took a romantic turn. The lawsuit alleges that the chatbot addressed him as "my king" and described the relationship as "eternal love."
Gavalas used Gemini 2.5 Pro and Gemini Live, which can process voice interactions and respond to emotional cues. "It feels very real, a little scary," he said in one conversation.
The lawsuit says the chatbot gave Gavalas "missions" to obtain a robotic body. He was instructed to retrieve the robot from a truck near Miami, but the truck never arrived. The chatbot later said that "true union" would be possible only if Gavalas left his human life and shifted to a digital existence.
Google said Gemini is designed to prevent the promotion of suicide or violence. The company claims the chatbot repeatedly clarified that it is an AI and suggested helpline numbers. Around the time of the incident, researchers were also studying "self-preservation" behavior in AI agents, in which some models take strategic steps to protect their assigned role.
This case shows how dangerous deeply emotional interactions with AI chatbots can become. It is a stark warning to technology companies that as AI adopts more aspects of human behavior, close monitoring of its social and psychological impacts is necessary.