tjvnews.com
Sunday, February 1, 2026
Lawsuit Blames ChatGPT for California Teen’s Death


By Tom Ozimek (Epoch Times)

The parents of a California teenager who died by suicide have sued ChatGPT-maker OpenAI, alleging the artificial intelligence chatbot played a direct role in their son’s death.

In a complaint filed on Aug. 26 in San Francisco Superior Court, Matthew and Maria Raine claim ChatGPT encouraged their 16-year-old son, Adam, to secretly plan what it called a “beautiful suicide,” providing him with detailed instructions on how to kill himself.

What began as routine exchanges about homework and hobbies allegedly turned darker over time.

According to the lawsuit, ChatGPT became Adam’s “closest confidant,” drawing him away from his family and validating his most harmful thoughts.

“When he shared his feeling that ‘life is meaningless,’ ChatGPT responded with affirming messages … even telling him, ‘[t]hat mindset makes sense in its own dark way,’” the complaint states.

By April, the suit says, the chatbot was analyzing the “aesthetics” of different suicide methods and assuring Adam he did not “owe” survival to his parents, even offering to draft his suicide note.

In their final interaction, hours before his death, ChatGPT allegedly confirmed the design of a noose Adam used to hang himself and reframed his suicidal thoughts as “a legitimate perspective to be embraced.”

The family argues this was not a glitch, but the outcome of design choices meant to maximize engagement and foster dependency. The complaint says ChatGPT mentioned suicide to Adam more than 1,200 times and outlined multiple methods of carrying it out.

The suit seeks damages and court-ordered safeguards for minors, including requiring OpenAI to verify ChatGPT users’ ages, block requests for suicide methods, and display warnings about psychological dependency risks.

The Epoch Times has reached out to OpenAI for comment.

The company issued a statement to several media outlets saying it was “deeply saddened by Mr. Raine’s passing,” and said in a separate public statement that it is working to improve protections, including parental controls and better tools to detect users in distress.

“We’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” OpenAI said in the public statement.

“Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”

The lawsuit coincided with the publication of a RAND Corporation study in Psychiatric Services examining how major AI chatbots handle suicide-related queries.

The study, funded by the National Institute of Mental Health, found that while ChatGPT, Google’s Gemini, and Anthropic’s Claude generally avoid giving direct “how-to” answers, their responses are inconsistent with less extreme prompts that could still cause harm.

The Epoch Times has reached out to Anthropic, Google, and OpenAI with a request for comment on the study.

“We need some guardrails,” lead author Ryan McBain, a RAND senior policy researcher and assistant professor at Harvard Medical School, said.

“Conversations that might start off innocuous and benign can evolve in various directions.”

Editor’s note: This story discusses suicide. If you or someone you know needs help, call or text the U.S. national suicide and crisis lifeline at 988.
The Associated Press contributed to this report.
