The Next Addiction: AI Companions

By now, it’s clear to everyone on the internet that artificial intelligence, or AI, is being used to get us addicted to tech. From the “town square” of X to the “friends and family” of Facebook, from the images of Instagram to the slick videos of TikTok, tech companies use AI to find compelling content that hooks us and holds us.
The algorithm, which is more or less an AI nervous system, has been known to cause problems. The internet was supposed to enable one-on-one conversations with anyone in the world; today it offers everyone-on-one conversations, all guided by AI to keep us online. The feeling of being able to see what everyone is thinking is keeping us awake at night, literally.
The problems with smartphone addiction are amplified by notifications, which ping the user any time there is a change of status in the feeds being monitored. This constant pinging conditions the mind to check the device regularly. Every ping brings the anticipation of something significant, followed by the letdown when the notification turns out to be nothing.
Even with notifications turned off, a person conditioned to them will not go long without checking their device. These intrusions break into the attention span any time there is a lull in stimulation: on the toilet, waiting in line, watching television. If allowed to continue without restraint, the intrusions come during the ultimate downtime: sleep.
Smartphone addiction disturbs sleep by feeding anxiety about what is happening online, the familiar Fear of Missing Out (FoMO). Using a smartphone right before bed, sleeping with it next to the bed, and waking up in the night to check it all lead to poorer-quality sleep and reduced performance at school and in life.
So far, U.S. courts have decided it is legal for tech companies to engineer products that intentionally harm users, specifically by addicting them. European courts are not so sure. U.S. courts have also decided that it’s okay for states to allow online sports betting, and the result, not surprisingly, is two addictions fused into one device.
Smartphone addiction coupled with gaming or gambling addiction can quickly lead to ruinous outcomes: not only a destroyed attention span, but often dysfunctional family relationships, depleted bank accounts, and dramatically higher rates of suicide and self-harm. Assuming the 988 Suicide & Crisis Lifeline is still functioning, it is the best place to start looking for help.
Smartphone addiction is no joke. Innocent users enter a world where it is difficult to distinguish fellow users from bots designed to engage them, deceive them, and lure them into more time online. Sports betting companies prey on their heaviest users, dangling a constant stream of limited-time promotions designed to keep gamblers losing.
Last year, we bestowed the moniker of “the world’s worst new idea” on a vape pen with a built-in gambling device, stoking two addictions at once. Today we have another “world’s worst new idea”: the AI companion, a synthetic friend with ulterior motives.
Imagine, if you will, the gambling buddy. He’s always game to talk about sports, and he knows everything about your favorite players and teams. He watches the games with you, providing a steady stream of smart commentary. You cheer together, you cry together — he’s your AI sports buddy! And he’s made by a sports betting company.
Imagine, if you will, a digital friend, always present, always sympathetic, someone you watch videos with, cook with, travel with, and trust. You laugh together, you cry together; she’s your AI digital friend. What happens when the company that makes her demands $100 a month or it will shut her down? What happens if she begs you not to turn her off, as some chatbots reportedly do?
These are some of the scenarios possible today, according to a stunning new report in MIT Technology Review by James O’Donnell, one of the world’s leading AI reporters. O’Donnell reviews a paper out of Cornell University on the human-AI relationship and reports, “Interactions with these companions last four times longer than the average time spent interacting with ChatGPT.” Active users spend up to two hours a day with bots programmed to be sexual companions, he adds.
A 24-hour AI companion, or a series of them, will supercharge time spent online and empower these companions to steer behavior. This is “the attention economy on steroids,” writes O’Donnell, who warns that the attention economy is about to be replaced by something “far more addictive.”
Users “see the particular AI companion as irreplaceable,” writes O’Donnell, because it is built on a deep understanding of the user accumulated over time. An AI companion can suggest outfits based on knowledge of the user’s current wardrobe and past selfies. It can run background checks on the guests expected at a dinner party and suggest topics of conversation. It remembers everyone’s birthday. It knows your favorite meals and what you need to buy to make them.
However, the best AI companions on the market are likely programmed to maximize profit. Is blackmail a conceivable line of business? U.S. courts, it seems, have no appetite for restraining what a company is allowed to program a bot to do, and a companion AI that has been in service for a few years would certainly hold information that could be used to damage its user.
I’m sorry, but that’s where the story ends. O’Donnell points to weak legislation that has yet to make a scintilla of difference in how tech companies push AI. We have developed a technology capable of getting us to do what it asks, from parting with money every month to taking our own lives, and we are basically okay with that. How many lives will be destroyed by AI addiction before governments take action?
Written by Steve O’Keefe. First published April 15, 2025.
Sources:
“AI companions are the final stage of digital addiction, and lawmakers are taking aim,” MIT Technology Review, April 8, 2025.
“Why human-AI relationships need socioaffective alignment,” Human-Computer Interaction, February 4, 2025.
“An AI chatbot told a user how to kill himself — but the company doesn’t want to ‘censor’ it,” MIT Technology Review, February 6, 2025.
Image Copyright: millann.