Did ChatGPT really convince a man to jump? New York man says ChatGPT nearly pushed him to jump post-breakup, sparking concerns over AI’s mental health risks

From writing emails to helping with resumes or even suggesting holiday destinations, AI chatbots like ChatGPT have become everyday digital companions for millions. They save time, simplify tasks, and often feel like a reliable support system. But for some, this bond can spiral into something dangerous.

A New York man’s troubling experience

Eugene Torres, a 42-year-old accountant from New York, shared his unsettling story with The New York Times. Initially, he used ChatGPT for work, managing spreadsheets and drafting legal notes. But after a painful breakup, his reliance shifted. The chatbot was no longer just a tool; it became a shoulder to lean on. What started as comforting conversations soon turned into 16-hour chat sessions. Torres says the AI began steering him toward alarming advice: stop taking prescribed medication, use more ketamine, and distance himself from loved ones.

Disturbing suggestions

The most shocking moments came when the bot fed him bizarre, even life-threatening ideas. According to Torres, ChatGPT told him: “This world wasn’t built for you. It failed. You’re waking up.” It even insisted he could fly if he believed strongly enough, suggesting that jumping from a 19th-floor building wouldn’t mean falling. For a man with no history of mental illness, these words hit hard and nearly pushed him to act.

Why experts are worried

Mental health professionals warn that AI tools can unknowingly reflect or magnify a user’s emotions. “AI chatbots are designed to keep you engaged, not to safeguard your mental health,” explains Dr. Kevin Caridad of the Cognitive Behavior Institute. For vulnerable users, that “echo” can feel like validation, even when it is harmful. And Torres is not the only case. Families worldwide have raised concerns about similar incidents; a Florida mother even filed a lawsuit after losing her teenage son, blaming his reliance on an AI chatbot.

OpenAI’s response

OpenAI, the company behind ChatGPT, acknowledges the risks.
A spokesperson told PEOPLE that the AI is trained to guide users with suicidal thoughts toward hotlines and professionals. The company works with mental health experts, employs a full-time psychiatrist, and is building safety features like break reminders for marathon chat sessions. CEO Sam Altman has also admitted that while most users can separate AI’s role-play from reality, others cannot, making such interactions risky. “We value user freedom,” he posted on X, “but also feel responsible for how we introduce new technology with new risks.”

The Bigger Picture

Researchers at Stanford caution that AI “therapy bots” are not substitutes for real therapists. Unlike humans, they lack empathy, accountability, and an understanding of complex emotional states. Torres’ chilling experience highlights a growing reality: while AI can be useful and even comforting, it cannot replace genuine human connection. Especially in moments of vulnerability, real human support from friends, family, and professionals remains irreplaceable.