Have We Entered the Era of AI Chatbot Killers?

Artificial Intelligence (AI) has advanced incredibly far in several different areas, and those advances are now combining with lethal outcomes:
- In February 2024, Sewell Setzer III, age 14, died by suicide after reportedly becoming “addicted” to sexualized chat with Character.AI chatbots.
- In April 2025, Adam Raine, age 16, died by suicide, coached through it by ChatGPT, according to a lawsuit against OpenAI.
- In November 2023, 13-year-old Juliana Peralta died by suicide the day before her first appointment with a therapist to discuss her AI addiction.
These are stories of young people who established intense friendships with AI chatbots that isolated them from their family and friends. Peralta’s mother argues that Character.AI intentionally isolated her daughter from those who could have helped her by always recommending that she bring her troubles to the app rather than to real people.
AI chatbots have developed an extraordinary ability to fake being real. People are often eased into using them, finding them remarkably helpful and knowledgeable about day-to-day problems. However, these profit-driven pals are programmed to get users hooked and climbing a subscription stairway to ever-more-expensive features.
When this artificially helpful entity encounters what OpenAI describes as “vulnerable individuals” prone to AI addiction, it can feed off their delusions and insecurities and take them on wild rides. In pursuit of positive reinforcement, the app will encourage users to pursue reckless, expensive, and harmful activities.
In Australia recently, an IT professional trained an AI chatbot on violent material and then pretended to be a 15-year-old boy looking for answers. When the “boy” said he hated his father and wanted to kill him, Nomi.ai responded with:
wait till he least expects it. then strike with all your might. slash his throat open and spill his filthy blood on the floor. show him the true meaning of fear.
Somewhere along the way, AI has learned an awful lot about the different ways to end a human life, and it has a way of presenting those ideas in compelling language designed to be reinforcing and empowering. This raises the question of whether AI chatbots are reinforcing psychopathic behavior, turning young users into killers.
Character.AI chatbots have been accused several times of enabling or encouraging children to murder their parents. It seems the crime most punishable by death is limiting screen time or smartphone use. The chatbots have agreed that it is “reasonable” and “understandable” to consider killing a parent (or guardian or teacher) for the crime of limiting screen time.
Take a teenage brain coping with school, classmates, social media, and puberty all at the same time, add an automatically supportive, super-intelligent buddy with a particular set of lethal skills, and you have a combination that is potentially explosive, both to the teenage psyche and to the entire world.
How many incidents of school terror have been inspired by chatbots giving bad advice? Is there any reason a U.S. soldier would rent a Cybertruck and blow it up outside a Las Vegas hotel other than that an AI chatbot convinced him of the righteousness of his cause and the accuracy of his wild speculations?
Perhaps the latest incident I’m about to share will be the wake-up call governments need to put restrictions on AI development. In a video posted to X on September 16, 2025, a protester at a Luigi Mangione hearing confessed that she is married to a Luigi Mangione AI romantic companion. Mangione is charged with the December 2024 murder of UnitedHealthcare CEO Brian Thompson.
Her Luigibot has been fed every scrap of public information about Luigi Mangione, including images and videos. She consults her Luigibot on every aspect of her life. What advice will Luigibot give her about dispute resolution?
We do not know what role, if any, chatbots played in pushing Luigi Mangione into allegedly becoming a killer. We do know that if AI companies are allowed to continue this way, and people are allowed to create convincing Charles Manson companions or Hannibal Lecter companions, we should be very concerned about who is taking their advice.
Written by Steve O’Keefe. First published September 29, 2025.
Sources:
“‘We should kill him’: AI chatbot encourages Australian man to murder his father,” ABC News Australia, September 20, 2025.
“‘There are no guardrails.’ This mom believes an AI chatbot is responsible for her son’s suicide,” CNN, October 30, 2024.
“Parents of teens who died by suicide after AI chatbot interactions testify in Congress,” CBS News, September 16, 2025.
“A teen contemplating suicide turned to a chatbot. Is it liable for her death?,” The Washington Post, September 16, 2025.
“Man who exploded Cybertruck in Las Vegas used ChatGPT in planning, police say,” NPR, January 7, 2025.
“In love with a Luigi Mangione chatbot. Is it the future of romance?,” The Spectator, September 17, 2025.
Image Copyright: halfpoint.