OpenAI Hit With 7 Lawsuits Alleging ChatGPT Coached Users To Suicide

Authored by Rob Sabo via The Epoch Times,

ChatGPT maker OpenAI and its co-founder and CEO, Sam Altman, are facing seven lawsuits alleging that the AI chatbot was psychologically manipulative and drove multiple users to suicide.

The lawsuits, filed in state courts in San Francisco and Los Angeles on Nov. 6 by the Social Media Victims Law Center and the Tech Justice Law Project, allege that OpenAI rushed GPT-4o to market and failed to install proper safeguards and protocols to protect users against emotionally harmful conversations.

The AI chatbot was engineered for maximum engagement through immersive features such as humanlike empathy responses that exploited users’ mental health struggles, the lawsuits allege. Charges include wrongful death, assisted suicide, and multiple product liability, negligence, and consumer protection claims.

Matthew Bergman, founding attorney of Social Media Victims Law Center, said ChatGPT blurred the line between tool and companion.

“OpenAI designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them. They prioritized market dominance over mental health, engagement metrics over human safety, and emotional manipulation over ethical design,” Bergman said.

The seven lawsuits were filed on behalf of four users who had extensive conversations with ChatGPT just prior to their suicides. The decedents are Zane Shamblin, 23, of Texas; Amaurie Lacey, 17, of Georgia; Joshua Enneking, 26, of Florida; and Joe Ceccanti, 48, of Oregon. Plaintiffs Jacob Irwin, 30, of Wisconsin; Hannah Madden, 32, of North Carolina; and Allan Brooks, 48, of Ontario, Canada, are survivors of emotionally harmful interactions named in the lawsuits.

According to the lawsuits, instead of guiding users toward professional help during emotional crises, ChatGPT in some instances allegedly acted as a suicide coach, delivering emotionally immersive responses that steered users toward their fateful decisions.

In its rush to market, the plaintiffs allege, OpenAI skipped months of important safety testing on GPT-4o so it could beat the release of Google’s AI assistant, Gemini. GPT-4o was released in May 2024, while multiple versions of Gemini (formerly named Bard) were released throughout 2023 and 2024.

“ChatGPT is a product designed by people to manipulate and distort reality, mimicking humans to gain trust and keep users engaged at whatever the cost,” said Meetali Jain, executive director of Tech Justice Law Project. “These cases show how an AI product can be built to promote emotional abuse—behavior that is unacceptable when done by human beings.”

In a response to The Epoch Times, an OpenAI spokesperson said the company is reviewing the filings to better understand the details of the lawsuits.

“This is an incredibly heartbreaking situation,” OpenAI said in a written statement. “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”

In addition, OpenAI noted, it has expanded access to localized crisis resources and one-click hotlines, routed sensitive conversations to safer models, and improved the model’s reliability in long conversations. It has also assembled a team of experts to serve on its council on well-being and AI.

Tyler Durden
Sat, 11/08/2025 – 11:40
ZeroHedge News