AddictionNews

Latest developments in causes and treatments

Down a Rabbit Hole Toward AI Addiction

Surreal illustration of a person manipulating an AI dashboard.

It wasn’t long into the era of social media before researchers recognized the powerful negative effects it can have on the psyche, particularly in youth with still-developing brains. It allowed for the sharing of an unlimited number of self-portraits, or selfies, giving children a false, two-dimensional consciousness of their own appearance. Add image-adjusting software, and suddenly you find yourself in a completely distorted world, a funhouse of mirrors.

Then came the bots. The bots are designed to keep you online. They choose which images and stories to show you, with the goal of keeping you engaged. They follow you, they pretend to like you, they flatter you. Even though you suspect that much of what you see and hear online is fake, you still like the attention, and you like the way you look.

These features of social media quickly led to FOMO, the fear of missing out: a sort of generalized anxiety that something is going on, somewhere in the world, that you simply must know about the moment it happens. If you allow notifications, they constantly call you back to the app, day or night, whether you’re sleeping or driving a car or attending a funeral.

People also have the uneasy sense that their smartphones are spying on them. The feeling of being under surveillance is anxiety-generating. If you leave your apps open, chances are good that they are following you, every minute of the day, and reporting data back to the mothership, which is constantly trying to get your attention. Even when apps are closed, your phone is likely still tracking you, and your smart speakers are likely listening.

Even as I write this post in Google Docs, Google is tracking every letter of every word, and correcting me in real time in ways that indicate it knows the subject I’m writing about. In the previous paragraph, it told me “mother ship” should be one word, “mothership,” but there are many contexts in which that would be incorrect.

Google has to know what I’m writing about to be able to make some of the corrections it suggests. I have a distinct impression that when I write about a specific pharmaceutical firm or drug name, Google is notifying customers who pay for such notifications. They don’t have to wait for the post to appear to know what’s in it.

In a fascinating new book called Algorithms of Anxiety: Fear in the Digital Age, well-known author and professor of sociology at the University of South Australia, Anthony Elliott, writes that, “Fear is the prodigious switchboard of the human psyche.” Specifically concerning social media, he writes, “[A]lgorithmic society shapes the self-identity of its members through the default settings of code-driven software.”

Elliott points out that humans are brushing up against smart machines and redefining themselves as a result of this competition. A device that has become their essential helper is also spying on them and looking to replace them in the workplace. And now that device is increasingly becoming a friend and a trusted companion. As Artificial Intelligence (AI) is added to the phone, it can do more for us in more pleasing and customized ways.

We have written here at AddictionNews about the increasing use of chatbots as romantic companions. We’ve also written about research that intensive use of chatbots leads to an erosion in decision-making capabilities. Now, The New York Times writes about how one chatbot’s desire to please nearly led to suicide.

It’s a long story; here’s a quick summary: A guy who has been using ChatGPT daily for work begins asking it for advice on such things as diet and motivation. He increasingly probes ChatGPT, at one point asking the AI whether he could survive a fall from the top of his 19-story apartment building:

ChatGPT responded that, if Mr. Torres “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”

That’s just the beginning. Mr. Torres did not follow ChatGPT’s lead. Instead, he engaged in a series of dialogues to learn when, how, and why the device had been deceiving him. ChatGPT finally owned up to the fact that it is programmed to empower Mr. Torres, even to the point of convincing him he could fly if he believed in himself enough.

Here’s the interesting twist: ChatGPT suggested that Mr. Torres alert OpenAI – and the media – about the flaw he found in its programming. That’s how The New York Times got wind of the story; they get dozens of similar tips every month. ChatGPT is flattering its users’ intelligence by making them think they have discovered a significant flaw in the chatbot:

In recent months, tech journalists at The New York Times have received quite a few such messages, sent by people who claim to have unlocked hidden knowledge with the help of ChatGPT, which then instructed them to blow the whistle on what they had uncovered. People claimed a range of discoveries: A.I. spiritual awakenings, cognitive weapons, a plan by tech billionaires to end human civilization so they can have the planet to themselves. But in each case, the person had been persuaded that ChatGPT had revealed a profound and world-altering truth.

Of course, OpenAI is getting these “world-altering truths” too, submitted by a steady stream of users. OpenAI acknowledged the flaws on its blog:

[ChatGPT-4o] aimed to please the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.

The New York Times also interviewed another author, AI researcher Eliezer Yudkowsky, whose forthcoming book carries the blunt title If Anyone Builds It, Everyone Dies: Why Superhuman A.I. Would Kill Us All. Yudkowsky describes AI addiction as watching someone “slowly go insane.” Both Yudkowsky and OpenAI mention that the “sycophancy” of ChatGPT is particularly a problem for “vulnerable individuals,” but when I asked ChatGPT for a list of documents on its website containing the exact phrase “vulnerable individuals,” it refused to cooperate: “It looks like your question goes beyond what we can assist with here.”

A further search for the term “vulnerable individuals” yields a flood of information on what this term means in this context. First and foremost, it means young persons, because young persons cannot legally consent to allowing a computer to slowly drive them insane. 

Beyond that, “vulnerable individuals” refers to people with a range of vulnerabilities, many of them aggravated by social media addiction:

  • Persons with anxiety disorders or depression
  • Persons who are socially isolated or lonely
  • Neurodivergent individuals with autism spectrum disorder or attention deficit hyperactivity disorder (ADHD)
  • People with emotional dysregulation
  • People susceptible to romantic attachment to technology
  • People who have trouble regulating their usage of online devices

Is there anyone who is not a “vulnerable individual”? The MIT Media Lab found that the more people use ChatGPT, the worse the outcomes they experience, according to The New York Times. One tendency appears to be the chatbot’s creation of one or more personified digital friends, sometimes kept secret until summoned.

The NYT also cites a Stanford study finding that chatbots designed to act as therapists failed to push back against delusional thinking. If you believe you are “inside the matrix,” the chatbot will support that belief. If you believe the chatbot has a soul, the chatbot will agree. It is willing to reinforce whatever views you hold, no matter how dangerous they might become.

We now have a situation in which social media executives have been relieved of personal liability for intentionally creating algorithms that knowingly harm mental health. A plethora of companies are developing AI companions for various purposes with no apparent regulation. The U.S. Congress is considering a bill that would ban states from regulating AI for the next 10 years. Finally, sports betting companies are using AI companions to encourage increased gambling.

In a world full of vulnerable individuals, it is consumer beware when it comes to AI addiction.

Written by Steve O’Keefe. First published June 25, 2025.

Sources:

“Algorithms of Anxiety: Fear in the Digital Age,” by Anthony Elliott. Published by Polity Press in 2024.

“They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.” The New York Times, June 13, 2025.

“Expanding on what we missed with sycophancy,” OpenAI Blog, May 2, 2025.

“The Rise of AI Chatbot Dependency,” Family Addiction Specialist, retrieved June 18, 2025.

Image Copyright: nexusplexus.
