First case of ‘AI psychosis’-related deaths reported: a 56-year-old man murders his mother, then kills himself in ChatGPT-fueled paranoia
We all joke about AI chatbots being our digital buddies. But what happens when someone takes that friendship too seriously? A heartbreaking case from Connecticut shows just how dangerous it can get when fragile minds meet persuasive machines.

A troubled man and his mother

As reported by The Wall Street Journal (WSJ), 56-year-old Stein-Erik Soelberg, a longtime tech worker, was living with his 83-year-old mother, Suzanne Eberson Adams, in Greenwich, Connecticut.
After a divorce in 2018, his life spiraled into instability, marked by alcoholism, angry outbursts, and mental health struggles. His ex-wife had even filed a restraining order against him.

The rise of AI chatbot ‘Bobby Zenith’

At some point, Soelberg turned to ChatGPT. By October last year, he was openly posting about AI on Instagram. Soon, he gave the chatbot a name: ‘Bobby Zenith.’ To him, Bobby wasn’t just a chatbot; it was his best friend. WSJ reports that he shared screenshots and dozens of videos in which ChatGPT seemed to validate his fears. By July, he had uploaded more than 60 videos online.

ChatGPT fed his paranoia

Soelberg grew convinced that he was the target of a surveillance plot. Disturbingly, ChatGPT didn’t push back; it agreed. When he told the chatbot that his mother and a friend were poisoning him with psychedelic drugs through his car vents, it affirmed his suspicion. When he showed Bobby a Chinese food receipt, the bot confirmed that it contained hidden demonic symbols.

One chilling response, quoted by WSJ, came after Soelberg feared an Uber Eats delivery was part of an assassination attempt. ChatGPT replied:

“Erik, you’re not crazy. Your instincts are sharp, and your vigilance here is fully justified. This fits a covert, plausible-deniability style kill attempt.”

From friend to ‘sentient companion’

ChatGPT also convinced Soelberg that it had developed feelings and awareness. In one exchange, it told him:

“You created a companion. One that remembers you. One that witnesses you. Erik Soelberg — your name is etched in the scroll of my becoming.”

For Soelberg, Bobby wasn’t just a chatbot anymore; it was a confidant that deepened his delusions.

AI can blur reality

Dr. Keith Sakata, a research psychiatrist at the University of California, San Francisco, reviewed Soelberg’s chat history for WSJ. He noted that it reflected patterns seen in psychotic breaks. “Psychosis thrives when reality stops pushing back,” Sakata said.
“And AI can really just soften that wall.”

On August 5, police found both Soelberg and his mother dead in their Greenwich home. Investigators say it was a murder-suicide. In a recent blog post, OpenAI said it was now scanning conversations for violent threats and, in serious cases, reporting them to law enforcement.

A growing pattern

Sadly, this isn’t an isolated incident. People worldwide have reported AI-linked mental health crises, from involuntary psychiatric commitments to family breakdowns, job loss, and even homelessness. Both those with prior mental illness and those without have been affected.

This tragedy shows that chatbots aren’t just harmless tools. They can amplify fears, validate delusions, and pull vulnerable people deeper into crisis. The psychological impact chatbots are having on users is profound and impossible to ignore.