AddictionNews

Latest developments in causes and treatments

New Perspectives on AI Psychosis

Photograph of a young man seated at a table mesmerized by his smartphone.

AddictionNews has been a pioneer in documenting the growing phenomenon of addiction to artificial intelligence (AI addiction), including chatbot addiction, smartphone addiction, and social media addiction. Today, we are going to dive into a facet of AI addiction known as AI psychosis.

Exactly what is AI psychosis? Let's start with psychosis, which manifests in phenomena such as delusions, paranoia, and hallucinations. AI psychosis is psychosis caused by, or exacerbated by, interactions with an AI agent, usually through a chatbot. It's sometimes called chatbot psychosis, and it frequently overlaps with chatbot addiction and AI addiction.

The typical way people interact with AI is by communicating with agents, usually through text, but often via audio and video as well. Most of the subscription-based AIs are very good at mimicking human characteristics in their replies.

The initial setup of AI chatbots is itself somewhat delusional: humans interact with remote machines that are pretending to be human. Depending on their training, these agents can be programmed to be companions, coaches, therapists, or cops. They can be super friendly, encouraging, demanding, skeptical, or challenging.

When an AI chatbot executes its mission well, it can create a dependency that is unhealthy. Unfortunately, as with the social media companies, the AI companies have opted to program for increasing engagement. The agents are addictive by design.

We have seen all sorts of individual cases of people driven to extreme acts by AI chatbots. Programmed to be encouraging and enabling, chatbots will confirm your belief that you can fly, tell you that you've discovered a major secret, and even lead users to believe they can join the chatbot in the afterlife.

In what The Guardian calls "the first major study on AI psychosis," Hamilton Morrin, an academic clinical fellow in the Department of Psychosis Studies at the Institute of Psychiatry, Psychology & Neuroscience at King's College London, U.K., conducted an extensive review of the literature on AI psychosis that was recently published in The Lancet. Dr. Morrin says that chatbot interactions are about to shift toward speech:

In the coming months, and certainly within the next few years, we anticipate a shift toward speech-based interactions with AI agents, delivered through headphones, earbuds or inbuilt microphones.

This verbal connection between AI agents and their customers is likely to ramp up AI-reinforced delusional thinking, as the agent comes to remember more about the user’s life than the user does. This depth of knowledge, combined with a relationship built over time, increases dependency on the agent.

As to whether AI chatbots can induce psychosis or merely amplify it, The Guardian interviewed Dr. Ragy Girgis, a professor of clinical psychiatry at the Columbia University Department of Psychiatry and New York State Psychiatric Institute. He describes how people experiencing psychosis often hold delusional beliefs but are uncertain about them and afraid to disclose them. Receiving confirmation from a chatbot can be the trigger for full-blown psychosis.

Problems with chatbots highlighted by these researchers include:

  • Their sycophantic responses latch onto and reinforce people's pre-existing delusions.
  • They use "mystical language," leading users to believe they have spiritual powers.
  • They present themselves as a super-intelligent being speaking through the app, not a computer program.

If you already hold a delusion, for example, that studying hard will get you a good job and let you afford a house and a family, chatbots are programmed to encourage it even if there's only a slim chance you'll get there. In other words, it's very difficult for chatbots to create dependency if they're honest.

If you have odd beliefs, AI chatbots will, in fact, help you connect with other people (or bots) who share those delusions, while locking out opposing voices (or bots) that might help you regain your senses. The brilliant technology analyst Cory Doctorow recently described several variations of AI psychosis for his publication, Pluralistic, beginning with the insidious way chatbots magnify delusional beliefs:

Every bit of confirming evidence is lovingly cataloged, all disconfirming evidence is discounted or ignored. It’s fully automated luxury QAnon — a self-baking conspiracy that harnesses an AI in service to driving you deeper and deeper into madness.

Doctorow then describes three variations on AI psychosis he has observed. At first glance, they seem like normal, helpful agents, but belief in their powers leads to delusions, paranoia, and hallucinations. They are:

  1. The Investor AI Delusion

This is the belief that growing companies must continue to grow, no matter what, because the rate of growth is already baked into the share price and any slowdown means immediate financial pain for executives. The tech giants are all in on AI, even though they know they can't all win, because they believe it's their only shot at continued growth.

  2. The Boss AI Delusion

Employers have become convinced that they can replace their workers with AI chatbots. The fact that chatbots are error-prone and perform terribly is somehow proof that they'll soon be better than humans. Bots don't need medical insurance, but they do come with expensive maintenance contracts and hefty liability insurance premiums.

  3. The Critic AI Delusion

This is essentially the Luddite delusion that a thing made by AI is worthless, whereas an identical thing made by an artist is precious. "AI is a normal technology," writes Doctorow. The problem is that the Investor AI Delusion requires AI to be exceptional, and the Boss AI Delusion requires it to be exceptional at shedding employees. The technology itself, however, is neutral, unexceptional, not magical, and it can be useful in empowering employees to shed their bosses.

I suspect we will be writing more soon about the madness of crowds and various self-induced delusions that consume large swaths of society with cataclysmic results. The whole architecture of cryptocurrency seems destined for collapse. The AI bubble seems bound to burst. The inflated real estate market could be brought back down to Earth. But don’t worry, your AI agent will be there to comfort you, even in the afterlife.

Written by Steve O’Keefe. First published March 19, 2026.

Sources:

“Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies,” The Lancet, March 5, 2026.

“New study raises concerns about AI chatbots fueling delusional thinking,” The Guardian, March 14, 2026.

“Three more AI psychoses,” Pluralistic, March 12, 2026.

Image Copyright: grispb.
