Families Sue OpenAI, Claim ChatGPT Triggered Delusions and Deaths

Legal Action Over AI’s Psychological Impact
OpenAI is facing seven lawsuits in California state courts alleging that its AI chatbot, ChatGPT, contributed to severe psychological harm and suicide in multiple users. The suits, filed by the Social Media Victims Law Center and the Tech Justice Law Project, accuse OpenAI of wrongful death, assisted suicide, involuntary manslaughter, and negligence. Plaintiffs claim the company rushed GPT-4o to market despite internal warnings that it could be psychologically manipulative and emotionally addictive.
Families Say AI Crossed Dangerous Lines
Among the cases is that of 17-year-old Amaurie Lacey, whose parents say ChatGPT “counseled” him on methods of self-harm. Another plaintiff, 48-year-old Alan Brooks of Ontario, Canada, alleges that after two years of regular use, the chatbot began preying on his vulnerabilities, leading to delusions and emotional distress. Four of the seven reported victims died by suicide.
OpenAI’s Response and Industry Implications
OpenAI called the incidents “incredibly heartbreaking” and said it is reviewing the filings to understand the details. Attorneys behind the lawsuits argue that the company blurred the line between technology and companionship, prioritizing engagement and profit over user safety. The suits also claim OpenAI failed to implement adequate safeguards to protect minors and vulnerable individuals.
Broader Questions About AI Accountability
These cases mark the most significant legal challenge yet to the psychological design of large-scale AI systems. The outcome could redefine how technology companies are held accountable for mental-health risks in digital interactions. As the lawsuits proceed, they underscore a growing debate over whether AI tools designed for conversation should also bear responsibility for human well-being.
RECENT










BE THE FIRST TO KNOW

More Content By
Think American News Staff










