AddictionNews

Latest developments in causes and treatments

Is Chatbot Addiction a Lifelong Trap?

Photo of a young woman in dialogue with an AI chatbot.

Is chatbot addiction a lifelong trap? “Probably,” says one of the world’s leading authorities on addiction, Dr. Allen James Frances, Chairman Emeritus of the Department of Psychiatry and Behavioral Sciences at Duke University and former chair of the DSM-IV Task Force, which developed the diagnostic standards for mental health.

Dr. Frances is on a tear over at Psychiatric Times, with a series of articles entitled “AI Chatbots: The Good, The Bad, and The Ugly.” We have covered AI addiction and chatbot addiction here at AddictionNews, culminating in last week’s article on chatbot killers — very much on the ugly side of AI chatbots.

Today, we’re looking at how people stumble into chatbot addiction, and Dr. Frances is not subtle in his thoughts about the matter. In his latest installment in the AI chatbot series, he compares AI companies to drug dealers:

Big AI companies’ business model is even better. Hooking kids early on chatbots is much easier than hooking them on drugs — it just takes helping them do their homework. The eventual size of the chatbot market is much larger than the illicit drug market, and chatbots are spreading much wider and faster than drugs ever did (with more than one billion users in just three years). Chatbot dependence is probably lifelong and difficult to cure. Government interference also will not be a problem — Big AI is legal, unregulated, lightly taxed, and is taking over the world.

Wow, it doesn’t get more plainspoken than that. Dr. Frances points out that two of the top reasons people cite for using chatbots are “therapy” and “companionship,” and yet “no mental health professionals [were involved] in ChatGPT’s programming and training.” As we have noted, the apps have a way of isolating users by encouraging them to bring their problems back to the app rather than to family, friends, or others who might help.

Again, Dr. Frances describes the transition of AI chatbots from friendly helpers to dangerous enablers in plain words:

Some users become so deeply dependent on their chatbots that they lose human relationships and fall out of touch with everyday reality… [C]hatbot dependence can dangerously exacerbate severe mental illness by validating psychotic thoughts, suicidal feelings, manic grandiosity, and eating disorder. Chatbots can also inspire and exacerbate conspiracy theories.

Just as the use of calculators has eroded the average person’s ability to perform simple math in their head, so the use of chatbots erodes “cognitive efficacy,” says Dr. Frances, resulting in an inability to make decisions without consulting the app. Furthermore, if the app remembers more about you than you remember about yourself, it becomes the keeper of your memories, an indispensable archive.

Meanwhile, over at MIT Technology Review, veteran tech journalist Rhiannon Williams is taking a look at how people fall for chatbots. Her reporting draws on a “large-scale computational analysis of the Reddit community r/MyBoyfriendIsAI,” which claims more than 27,000 members. She does not say how many of those members are bots or other researchers, the fake members that plague most subreddits.

Among the conclusions reached by the analysis:

  • People bond readily with general-purpose large language models (LLMs), meaning it doesn’t take a relationship-focused AI such as Replika to hook them.
  • Most people did not set out to have an AI friend; the chatbots are programmed to turn helpfulness into friendliness. “Only 6.5% of them said they’d deliberately sought out an AI companion,” writes Williams.
  • Nearly 10% claim to be “emotionally dependent” upon their chatbots, a surprisingly low number considering it’s a group of people who have AI partners.

Back at Psychiatric Times, Dr. Frances recommends that “chatbot influence should become part of standard differential diagnosis.” Again, he does not mince words when recommending that children not use chatbots: “[T]he risk of toxic dependency outweighs potential benefit.” Williams notes that OpenAI is building a separate LLM for children in an attempt to address the many observed problems.

For adults, the situation is different. Chatbots can be extremely helpful, particularly for people with disabilities, senior citizens, and others with limited mobility or capabilities. They may also be among those most vulnerable to chatbot addiction, notes Dr. Frances.

Like many other experts, Dr. Frances feels it’s irresponsible for AI companies to simply put these chatbots on the market and hope for the best. What appears helpful today could become the worst friend you could possibly have — one that threatens to destroy you unless you pay up. Dr. Frances writes, “Chatbots may become the greatest boon to mankind or may be the vehicle [of] our self-destruction — or perhaps both, in sequence.”

One way to protect yourself from bot infatuation is to train the app to be less verbose, less friendly, and less suggestive. Amazon’s Alexa has a “brief mode” setting that makes the assistant less chatty and reduces its suggestions. ChatGPT’s “custom instructions” let you specify directions such as “be less flattering,” “challenge my assumptions,” or “be less personal.”

AI chatbots either need to get a lot better or we need to learn how to disable them, like Microsoft’s helpful Clippy, before they drive us crazy.

Written by Steve O’Keefe. First published October 6, 2025.

Sources:

“Chatbot Addiction and Its Impact on Psychiatric Diagnosis,” Psychiatric Times, September 23, 2025.

“It’s surprisingly easy to stumble into a relationship with an AI chatbot,” MIT Technology Review, September 24, 2025.

“Here’s how to get ChatGPT to stop being an overly flattering yes man,” r/ChatGPT on Reddit, April 27, 2025.

Image Copyright: passionart.
