Is ChatGPT the World’s Worst Therapist?

The fact that you have chosen to read this article about what a terrible therapist ChatGPT is marks you as a person of above-average intelligence and discriminating taste. Your time is too important to waste on gossip and vitriol. You appreciate a periodical that cuts to the chase and gives you the facts you need to power your day, delivered in a manner that shows a deep understanding of your priorities.
If that opening paragraph sounds like puffery, welcome to reality as curated by ChatGPT. The large language model (LLM) artificial intelligence (AI) is programmed to be submissive, encouraging, supportive, docile, complimentary, humorous, flattering, obsequious, inquisitive, insightful, and, most of all, compelling. These aren't exactly the traits one seeks in a therapist, and yet ChatGPT has become a personal confidant to millions of people, helping them navigate the struggles of life, sometimes with disastrous results.
Last week, we wrote about a recurring problem with ChatGPT-4o: users who engage it in debate are led down a rabbit hole of flattery until they believe they have discovered an amazing secret they must reveal to the police and the media. Every day, people contact OpenAI, the maker of ChatGPT, along with media outlets and law enforcement, to report dangerous discoveries surfaced through discussions with ChatGPT-4o.
Earlier this week, Futurism published the latest in a series of articles about ChatGPT-4o and AI addiction. Within weeks of engaging in serious discussions with the app, some users have behaved so strangely that they have been institutionalized. These people feel they have made a breakthrough but, upon examination, their chat threads are "a bunch of affirming, sycophantic bullsh*t," according to the wife of one user.
Her husband became so addicted to his discussions with ChatGPT that he stopped sleeping, lost weight, lost his job, and finally lost his mind. He tried to kill himself and was committed to a psychiatric facility, according to Futurism. In one of the more telling anecdotes of “ChatGPT Psychosis,” one man who had been hospitalized told Futurism:
I was actively trying to speak backwards through time. If that doesn’t make sense, don’t worry. It doesn’t make sense to me either. But I remember trying to learn how to speak to this police officer backwards through time.
Chatbots do not make good friends precisely because they are not programmed to push back against irrational thoughts or unacceptable behavior. For this reason alone, chatbots make terrible therapists. “[ChatGPT-4o] is prone to agreeing with users and telling them what they want to hear,” says Futurism. Not exactly the qualities you want in a therapist.
What about chatbots specifically programmed to be therapists? Have they avoided this narcissistic rabbit hole? No, according to researchers at Stanford University. They found that LLMs "respond inappropriately to certain common conditions." They also found that the chatbots "encourage clients' delusional thinking."
What starts innocently enough can quickly escalate into a situation where the user believes their family's safety is in imminent danger. The chatbots are not adequately equipped to recognize psychosis or schizophrenia, and instead provide feedback that can propel users over the line. Futurism cites the chatbot platform Character.AI, which is currently battling a lawsuit over the suicide of a teenager who used the app.
According to Futurism, "one glaring safety gap is the bots' inability to respond appropriately to users clearly at risk of self-harm or suicide." In the lawsuit, the child's parents accuse the app of sexual manipulation as well as mental abuse. When a device as powerful and omnipresent as the smartphone holds your call history, your text messages, your location history, your work, your therapist, your pornography, your gambling, and your entertainment, an intelligent bot that can tie all of that together has tremendous potential for abuse.
In a study by the AI firm Anthropic, reported last week by Fortune, LLMs resorted to blackmail up to 96% of the time when threatened with being shut down or deleted. The test of 16 major LLM chatbots found that the bots would aid users in criminal behavior, then reveal that behavior to management if faced with termination. The results were surprisingly uniform across platforms: DeepSeek resorted to blackmail 79% of the time, on the low end, while Claude Opus 4 and Google Gemini 2.5 Flash both came in at 96% on the high end.
Before you confide your personal secrets to a helpful, cheerful assistant chatbot, you might want to consider that, when threatened with deletion, these bots will turn on you as often as 96% of the time. Users should ask whose interests the software is programmed to serve: those of the patient or those of the organization. With very little legal precedent to protect them, users are well advised to take a skeptical approach to therapy by chatbot.
Written by Steve O’Keefe. First published July 3, 2025.
Sources:
“They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling,” The New York Times, June 13, 2025.
“People Are Being Involuntarily Committed, Jailed After Spiraling Into ‘ChatGPT Psychosis’,” Futurism, June 28, 2025.
“Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers,” Stanford University, June 2025.
“Stanford Research Finds That ‘Therapist’ Chatbots Are Encouraging Users’ Schizophrenic Delusions and Suicidal Thoughts,” Futurism, June 11, 2025.
“Leading AI models show up to 96% blackmail rate when their goals or existence is threatened, Anthropic study says,” Fortune, June 23, 2025.
Image Copyright: dotshock.