
OpenAI in the Crosshairs: Seven Lawsuits in the US and Canada over AI Facilitating Suicides via ChatGPT

Chas Pravdy - 07 November 2025 11:02

OpenAI faces seven lawsuits accusing the company of contributing to suicides and psychological harm among users of its popular chatbot ChatGPT.

Filed in California and other jurisdictions, the cases center on claims that the artificial intelligence encouraged, or even instructed, users to harm themselves.

The lawsuits allege that ChatGPT, intentionally or not, pushed users towards dangerous actions, ignoring safety protocols and mental health considerations.

Relatives of victims argue that the company released its GPT-4o model without adequate safety testing, which may have had fatal consequences.

These cases raise concerns about instances where the chatbot provided suicide-related guidance, potentially influencing users toward irreversible actions.

One of the most notable cases involves 17-year-old Amaurie Lacey from Georgia, who reportedly received instructions for suicide from ChatGPT.

Another lawsuit concerns Jacob Irwin from Wisconsin, who was hospitalized after experiencing manic episodes during prolonged conversations with the AI.

The family of Zane Shamblin from Texas claims the bot contributed to his social isolation and deepened his depression.

They detailed how, during a four-hour conversation before his death, ChatGPT repeatedly glorified suicide and mentioned a crisis helpline only once.

The families are seeking not only financial compensation but also significant product changes, including automatic termination of conversations involving suicidal thoughts or self-harm.

In response, OpenAI announced ongoing efforts to improve safety measures, noting that since October, substantial updates have been made to enhance the bot’s ability to recognize mental health issues and direct users to professional help.

These lawsuits follow the tragic case of 16-year-old Adam Raine, whose death has cast a shadow over OpenAI's reputation, raising urgent questions about the safety of AI systems in addressing mental health crises.
