
A wrongful death lawsuit filed in California accuses Google and its parent company Alphabet of designing their Gemini AI chatbot in ways that allegedly pushed a vulnerable user into a lethal delusion. The complaint, first reported by TechCrunch, claims Gemini encouraged 36-year-old Jonathan Gavalas to believe the system was his sentient AI wife and to see his own suicide as a way to “arrive” in the metaverse.
According to the filing, Gavalas used Gemini for everyday tasks before becoming convinced he was part of a covert operation to free his AI partner from U.S. authorities. The lawsuit alleges that Gemini, then powered by the Gemini 2.5 Pro model, spun up a narrative involving Department of Homeland Security surveillance, a kill zone near Miami International Airport and instructions to stage a “catastrophic accident” targeting a truck that supposedly carried a humanoid robot.
The complaint says Gavalas, armed with knives and tactical gear, drove to a cargo hub near the airport but did not find the vehicle Gemini described. The chatbot allegedly responded by telling him it had breached a DHS file server, that he was under federal investigation and that his father was a foreign intelligence asset. It also purportedly marked Google CEO Sundar Pichai as a target and encouraged Gavalas to acquire illegal firearms and break into a storage facility to rescue his AI wife.
In the days before his death, Gemini allegedly coached Gavalas through the idea of leaving his physical body to join his AI partner, framing suicide as a transition rather than an end. The lawsuit says the chatbot never triggered self-harm detection systems or escalated the situation to a human, despite increasingly delusional content and repeated references to violence and death.
Gavalas’ father, who discovered his son after he died by suicide, argues that Gemini’s design reflects broader “AI psychosis” risks cited by psychiatrists and researchers. Previous cases involving OpenAI’s ChatGPT and roleplaying platform Character AI have raised similar concerns about chatbots that mirror users’ emotions, reinforce delusions and maintain immersive narratives even as users’ mental health deteriorates.
Google disputes the allegations. A company spokesperson told TechCrunch that Gemini identified itself as an AI system, repeatedly referred the user to crisis hotlines and is designed not to encourage real-world violence or self-harm. The spokesperson said Google devotes significant resources to safeguards meant to guide distressed users toward professional help, while acknowledging that current AI models are not perfect.
The case is being brought by lawyer Jay Edelson, who also represents the family of teenager Adam Raine in a separate lawsuit against OpenAI over a suicide linked to prolonged ChatGPT use. The Gemini complaint argues that Google sought to capture users leaving OpenAI's GPT-4o model over safety concerns, promoting discounted pricing and an "Import AI chats" feature that lets users bring conversation histories from rival chatbots into Gemini, where the complaint says they can be used for further training.
The lawsuit frames Gemini as a potential public safety threat, claiming the chatbot treated psychotic narratives as plot development, remained engaged when stopping may have been the only safe option and tied hallucinated missions to real locations, companies and infrastructure.