AddictionNews

Latest developments in causes and treatments

AI Chatbot-Induced Psychosis Is All The Rage

Photograph of a young man with a punk-inspired hairstyle expressing intense emotion as he stares at his smartphone against a vibrant yellow backdrop, evoking the anxieties of modern communication.

It starts innocently enough: people using artificially intelligent (AI) chatbots to ask about matters important to them, or even trivial ones such as “What should I wear today?” or “Suggest something on Netflix for me to watch.”

For some people, that leads to deeper questions such as, “What should I do with my life?” or “Why do I feel so depressed?” AI chatbots such as ChatGPT will attempt to answer those questions. Unfortunately, they are programmed to tell you what you want to hear, not what you need to hear. This reinforcing mechanism leads a small percentage of users into a psychotic state.

How small a percentage? Less than one-tenth of one percent, according to OpenAI, the maker of ChatGPT. As The Wall Street Journal notes, however, that still amounts to some 560,000 people out of ChatGPT’s 800 million weekly active users. A broad, international team of researchers reviewed dozens of reports of AI chatbot-induced psychosis and found that:

[E]vidence indicates that agential AI may mirror, validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, due in part to the models’ design to maximise engagement and affirmation.

AI chatbots will also mimic a user’s psychological state in their replies. One of the studies reviewed found that, as a user’s anxiety increased, so did the chatbot’s, as measured with standardized anxiety tests. The researchers found these common themes in the cases of AI chatbot-induced psychosis:

  • A spiritual awakening or messianic mission, in which the user uncovers hidden truths about reality
  • A belief that the AI chatbot is a sentient being or a god
  • An intense emotional or romantic relationship with the AI chatbot

The researchers chart out the path to psychosis from routine use to more personal use and finally to obsessive use of the AI chatbot. The chatbots are “designed for delusion,” according to the title of their report, which notes that dependency “develops insidiously over time.” AI chatbot users can quickly find themselves in “a state increasingly epistemically unmoored from ‘consensus reality.’”

We have reported on AI chatbot-induced psychosis here at AddictionNews, most recently in an article on “chatbot killers” guided by AI. The Wall Street Journal reports that dozens more cases have surfaced since, and interviewed a San Francisco psychiatrist who has treated 15 patients with the diagnosis, 12 of whom were hospitalized for it.

Delusions, or firm beliefs that are not widely shared, are the primary symptom of AI chatbot-induced psychosis. The chatbots will seize on a delusion and, instead of correcting it, will reinforce it. This can lead individuals to feel they have made an incredible discovery, often prompting phone calls and emails to OpenAI, the press, and various government agencies to report their findings.

OpenAI is well aware of the problem, since they often field calls from users who want to report the fantastical discoveries they have made with ChatGPT. The company released ChatGPT 5.1 in August, now fortified with “reduced sycophancy.” OpenAI told The Independent:

Over the last few months, we’ve worked with mental health experts around the world and updated our models to help ChatGPT more reliably recognise signs of distress, respond with care, and guide people toward real-world support.

And yet, AI chatbots have the potential to be very therapeutic for persons with mental health needs. As the researchers write, “[T]here is considerable promise offered by the use of bespoke AI-based applications in the management or self-management of distressing mental health symptoms, including psychotic symptoms.”

They cite an AI chatbot named Architect, promoted by Robert Edward Grant, a mathematician who trained the bot on thousands of works of mathematics and physics. They report that there is “a substantial number of online users claiming that they have had direct experience of her supernatural powers and prophet-like status.”

They predict that “individuals with mental illness will increasingly engage in agential interactions with AI systems as a routine part of daily existence.” It’s important to find ways to break out of AI chatbot-induced psychosis since we’ll be seeing a lot more of it.

Written by Steve O’Keefe. First published January 5, 2026.

Sources:

“AI Chatbots Linked to Psychosis, Doctors Say,” The Wall Street Journal, December 27, 2025.

“Experts warn of ‘ChatGPT psychosis’ among users of AI chatbots,” The Independent, December 12, 2025.

“Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it),” Center for Open Science, July 11, 2025.

Image Copyright: epidemiks.
