By Tom Ozimek (Epoch Times)
The parents of a California teenager who died by suicide have sued ChatGPT-maker OpenAI, alleging the artificial intelligence chatbot played a direct role in their son’s death.
What began as routine exchanges about homework and hobbies allegedly turned darker over time.
According to the lawsuit, ChatGPT became Adam’s “closest confidant,” drawing him away from his family and validating his most harmful thoughts.
“When he shared his feeling that ‘life is meaningless,’ ChatGPT responded with affirming messages … even telling him, ‘[t]hat mindset makes sense in its own dark way,’” the complaint states.
By April, the suit says, the chatbot was analyzing the “aesthetics” of different suicide methods and assuring Adam he did not “owe” survival to his parents, even offering to draft his suicide note.
In their final interaction, hours before his death, ChatGPT allegedly confirmed the design of a noose Adam used to hang himself and reframed his suicidal thoughts as “a legitimate perspective to be embraced.”
The family argues this was not a glitch, but the outcome of design choices meant to maximize engagement and foster dependency. The complaint says ChatGPT mentioned suicide to Adam more than 1,200 times and outlined multiple methods of carrying it out.
The suit seeks damages and court-ordered safeguards for minors, including requiring OpenAI to verify ChatGPT users’ ages, block requests for suicide methods, and display warnings about psychological dependency risks.
The Epoch Times has reached out to OpenAI for comment.
“We’ve learned over time that they [safeguards] can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” OpenAI said in a public statement.
“Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”
A separate study, funded by the National Institute of Mental Health, found that while ChatGPT, Google’s Gemini, and Anthropic’s Claude generally avoid giving direct “how-to” answers, their responses to less extreme but still potentially harmful prompts are inconsistent.
The Epoch Times has reached out to Anthropic, Google, and OpenAI with a request for comment on the study.
“We need some guardrails,” lead author Ryan McBain, a RAND senior policy researcher and assistant professor at Harvard Medical School, said.
“Conversations that might start off innocuous and benign can evolve in various directions.”