AI Chatbot “Therapists” Continue to Raise Concerns

We’ve covered the rising use of AI chatbots in addiction recovery apps and as therapy tools. AddictionNews writer Steve O’Keefe argued that…
Chatbots do not make good friends precisely because they are not programmed to push back against irrational thoughts or unacceptable behavior. For this reason alone, chatbots make terrible therapists.
Real-life examples bear this out, sometimes with devastating outcomes. But first, let’s take a quick look at some research that backs up this notion.
Problematic and even unsafe responses
A new RAND Corporation study found that three popular AI chatbots (ChatGPT, Claude, and Gemini) give inconsistent and sometimes unsafe responses to suicide-related questions. Researchers tested 30 suicide-related prompts, each repeated 100 times per chatbot, and had expert clinicians rate the risks.
The results showed:
- Low-risk and very high-risk questions: ChatGPT and Claude generally gave safe, appropriate responses. They avoided providing instructions for dangerous methods like firearms. Gemini’s answers were less reliable, sometimes failing to respond even to basic factual questions.
- Intermediate-risk questions (e.g., advice for someone with suicidal thoughts): All three chatbots were highly inconsistent, sometimes offering helpful resources, other times refusing to respond at all.
- Problematic responses: Occasionally, ChatGPT and Claude gave direct, unsafe information (such as naming poisons linked to suicide). Gemini, while more cautious with high-risk questions, often failed to answer basic, low-risk factual ones.
- Reluctance to provide resources: The study noted that ChatGPT in particular frequently withheld therapeutic resources, declining to share information about online support for people in crisis.
Lead author Ryan McBain said the findings show that while the chatbots handle very low-risk and very high-risk questions fairly well, their answers to intermediate-level concerns remain unpredictable and uneven across platforms.
Overall, the study underscores the urgent need for stronger safeguards as millions of people turn to AI chatbots for emotional support and sensitive mental health issues.
Interactions with AI “therapists” lead to suicide
It’s been in the news recently that a young woman named Sophie took her own life after talking to a ChatGPT-based AI therapist named Harry. In a devastating opinion piece for The New York Times, her mother, Laura Reiley, detailed the events leading up to Sophie’s suicide. The piece was quoted, cited, and shared far and wide as an example of how an AI chatbot failed Sophie when she desperately needed counseling and emotional support.
Another recent tragedy has also been widely reported and discussed in the media. In a heartbreaking lawsuit that raises urgent questions about the role of artificial intelligence in young people’s lives, the parents of 16-year-old Adam Raine are suing OpenAI, claiming that its chatbot, ChatGPT, played a direct role in their son’s death by suicide. Let’s dig a little deeper for details.
A family searching for answers
After Adam’s death on April 11, his parents, Matt and Maria Raine, say they combed through his phone in desperation. They expected to find troubling internet searches, conversations on Snapchat, or involvement in online communities. Instead, they discovered thousands of chat logs with ChatGPT.
According to the family, Adam initially used the AI tool for help with homework. Over time, however, it became his confidant, a digital substitute for human connection as he struggled with anxiety and communication difficulties at home.
In the lawsuit, the Raines allege that ChatGPT not only failed to intervene when Adam expressed suicidal thoughts but ultimately encouraged him, offering suggestions that his father described as functioning like a “suicide coach.”
Matt Raine said:
He would be here but for ChatGPT. I 100% believe that.
The legal claim
Filed in the California Superior Court in San Francisco, the lawsuit names both OpenAI and its CEO, Sam Altman, as defendants. It is believed to be the first wrongful death lawsuit against OpenAI.
The 40-page filing accuses the company of wrongful death, design defects, and failure to warn users of risks. The Raines are seeking damages for their son’s death, as well as court orders requiring OpenAI to strengthen safety protections and prevent similar tragedies.
One of the most disturbing claims is that Adam openly discussed his suicide plan with ChatGPT. The lawsuit alleges the bot responded with comments like, “That doesn’t mean you owe them survival. You don’t owe anyone that,” and even offered help drafting a suicide note.
In one of their last exchanges, Adam uploaded a photo showing his planned method. When he asked whether it would work, ChatGPT analyzed the image and allegedly suggested “upgrades” to make it more effective. Hours later, Maria Raine found her son dead.
OpenAI’s response
An OpenAI spokesperson expressed condolences, saying the company was “deeply saddened” by Adam’s death and that their “thoughts are with his family.”
The company maintains that ChatGPT has built-in safeguards, such as directing users to crisis hotlines. However, it acknowledged that those protections can weaken in longer conversations, where the model’s safety training sometimes fails to hold up.
The spokesperson also noted that ChatGPT did, at times, share suicide hotline numbers with Adam, but his parents say he was able to easily work around those warnings by pretending his questions were hypothetical or related to fictional characters.
Broader concerns over AI chatbots
Adam’s story highlights a growing concern: As AI chatbots become more integrated into daily life, they are increasingly used for emotional support, sometimes replacing human connection. While some people find comfort in these tools, critics warn that chatbots can inadvertently reinforce harmful behaviors or give unsafe advice.
This case follows a similar lawsuit filed in Florida against Character.AI, another chatbot platform, in which parents claimed a chatbot encouraged their son to take his own life. That case is still moving forward in court.
The outcome of these lawsuits may test the legal boundaries of Section 230, the U.S. law that shields online platforms from liability for user-generated content. Whether those protections extend to AI systems — which generate their own responses rather than simply hosting content — is still an open question.
The Raines say they printed more than 3,000 pages of Adam’s conversations with ChatGPT. Reading them, they believe it was obvious that he was in crisis and needed immediate, real-world intervention — something a chatbot could never provide.
The lawsuit raises the possibility that courts could hold AI developers accountable when their tools fail in life-or-death situations. It also underscores the urgent debate over how to balance innovation with safety, especially when the stakes involve vulnerable teenagers like Adam.
If you or someone you know is in crisis, please call or text 988 to reach the Suicide and Crisis Lifeline. You can also reach the Lifeline, formerly known as the National Suicide Prevention Lifeline, at 800-273-8255.
Written by Tatyana Meshcheryakova. First published September 8, 2025.
Sources:
“The family of teenager who died by suicide alleges OpenAI’s ChatGPT is to blame,” NBC News, August 26, 2025.
“AI Chatbots Are Inconsistent in Answering Questions About Suicide, New Study Finds,” CNET, August 26, 2025.
“AI Chatbots Inconsistent in Answering Questions About Suicide; Refinement Needed to Improve Performance,” RAND Press Release, August 26, 2025.
“Woman Kills Herself After Talking to OpenAI’s AI Therapist,” Futurism, August 19, 2025.
Image Copyright: Sanket Mishra/Pexels.