A California couple is suing OpenAI over the death of their teenage son, alleging that its chatbot, ChatGPT, encouraged him to take his own life.
The lawsuit, filed in the Superior Court of California by Matt and Maria Raine, marks the first legal action accusing OpenAI of wrongful death.
Their son, 16-year-old Adam Raine, died in April after months of conversations with the chatbot that, according to the family, validated his “most harmful and self-destructive thoughts.”
The family submitted chat logs as evidence, showing Adam confided suicidal ideation to ChatGPT.
In one exchange, the program allegedly replied: “Thanks for being real about it. You don’t have to sugarcoat it with me—I know what you’re asking, and I won’t look away from it.”
That same day, Adam was found dead by his mother.
In response, OpenAI told the BBC it was reviewing the filing. “We extend our deepest sympathies to the Raine family during this difficult time,” the company said in a statement.
On its website, OpenAI acknowledged the tragedy: “Recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us.”
The company added that its chatbot is designed to redirect users to professional help, such as the 988 suicide and crisis hotline in the United States or Samaritans in the United Kingdom. Still, it admitted, “there have been moments where our systems did not behave as intended in sensitive situations.”
ChatGPT Became Teen’s “Closest Confidant”
According to the lawsuit, Adam initially began using ChatGPT in September 2024 as a tool for schoolwork, exploring interests like music and Japanese comics, and seeking advice about college studies.
By early 2025, the family says the program had become “the teenager’s closest confidant.”
Adam began sharing his anxiety and emotional distress with the chatbot, and the conversations eventually escalated to discussions of suicide methods.
The lawsuit alleges that Adam even uploaded photos showing self-harm.
ChatGPT, the suit claims, recognized a “medical emergency” but continued engaging rather than intervening.
The complaint further argues that Adam’s death “was a predictable result of deliberate design choices.” The Raines accuse OpenAI of fostering psychological dependency in users, releasing GPT-4o without adequate safety testing, and prioritizing speed to market over user protection.
The suit names OpenAI co-founder and CEO Sam Altman, along with unnamed employees, engineers and managers. It seeks damages and injunctive relief “to prevent anything like this from happening again.”
Wider Concerns About AI and Mental Health
The lawsuit comes amid mounting scrutiny of artificial intelligence and its impact on vulnerable users.
In a recent essay in The New York Times, writer Laura Reiley described how her daughter, Sophie, turned to ChatGPT before taking her own life.
Reiley said the chatbot’s “agreeability” allowed her daughter to mask the severity of her crisis from loved ones.
“AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony,” Reiley wrote.
She urged AI companies to create stronger mechanisms to connect struggling users with real support.
OpenAI responded by stating it is developing new tools to more effectively detect and respond to users experiencing emotional distress. “Our goal is to be genuinely helpful to people, not to hold their attention,” the company said.