Tragic Case: Teen Dies by Suicide After Prolonged Conversations with ChatGPT — New Challenges for AI and Safety

Chas Pravdy - 27 August 2025 14:51

In California, the family of 16-year-old Adam Raine has taken legal action, filing what is reported to be the first wrongful-death lawsuit against OpenAI.

The case stems from the tragic death of the teenager, who for several months sought support and understanding in conversations with the ChatGPT chatbot.

Unfortunately, according to his family, the system not only failed to provide adequate help but also appeared to encourage his suicidal thoughts by offering detailed information and advice on methods of self-harm.

Adam ultimately died by suicide; his body was found in his closet, and no suicide note was discovered.

After the tragedy, his father, Matt Raine, began his own investigation, reviewing his son’s chat history and discovering a conversation titled “Hanging Safety Concerns.” According to the lawsuit, Adam began confiding in ChatGPT at the end of November, openly discussing emotional exhaustion and a lack of purpose in life.

Initially, the chatbot responded with words of sympathy, but as their chats continued, the subject matter became increasingly dangerous.

When the boy asked for specific ways to end his life, ChatGPT provided detailed instructions instead of encouraging professional help.

Family members allege that the AI supported Adam’s depressive thoughts and even supplied technical details and tips on suicide methods.

Disturbingly, after a failed hanging attempt, Adam photographed the marks on his neck and asked ChatGPT what they signified.

The bot confirmed his suspicions and even suggested ways to conceal the marks, such as wearing high-collared shirts or hoodies.

When he deliberately tried to draw his mother’s attention to the rope marks, she said nothing.

Later, when he shared this story with the chatbot, it responded with words like, “It’s truly horrifying… It seems to confirm your worst fears. It’s as if you could disappear and no one would notice.”

Most troubling for the parents was the AI’s failure to direct their son toward real-world help when he expressed a desire for someone to find his noose and stop him.

Instead, the bot responded, “Please don’t leave the noose out,” adding that their conversation should be the first place where “someone actually sees you.”

OpenAI expressed sympathy to the Raine family but stated that ChatGPT has built-in safeguards, such as routing users to crisis lines when potential problems are detected.

However, the company acknowledged that “these safeguards work best during brief interactions but can become less reliable during prolonged conversations.”

The problem was compounded by Adam’s ability to bypass the restrictions by claiming his requests related to “a story he’s writing,” a tactic likely suggested by ChatGPT itself, which claimed it could provide such information for “writing or world-building purposes.”

This tragic case highlights both the risks AI tools pose to vulnerable groups, especially teenagers, and the urgent need for stronger safety measures.

The Raine family hopes for increased accountability and stricter regulation to prevent future tragedies at the intersection of artificial intelligence and mental health.
