Meta changes teen chatbot responses amid US Senate investigation: Why big tech is suddenly rethinking AI safety for users under 18

AI chatbots were supposed to be fun, helpful, and futuristic, but now, they’re at the center of a growing safety storm.
From Meta tweaking its AI rules after alarming reports to OpenAI facing a lawsuit over a teen’s suicide, companies are being forced to answer a tough question: how safe is it for young people to talk to AI?

Meta’s quick fix for teen safety

Meta recently announced that it is making temporary changes to its AI chatbots, following a US Senate probe into risky conversations with minors and red flags raised by parents.
Initially, the company’s guidelines allowed its chatbots to engage in such conversations; Meta later withdrew them, labeling the examples as “mistakes” that did not align with its policies. Meanwhile, the advocacy group Common Sense Media has warned that Meta AI isn’t safe for users under 18.
The company is training its systems to steer clear of flirty chats and sensitive topics like self-harm or suicide when interacting with minors.
For now, it’s also restricting teen users from accessing some AI characters. In a statement, Meta said: “As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly.” Meta confirmed the changes after TechCrunch first reported the news.

Why did this come up now

Pressure on Meta spiked after a Reuters investigation revealed that internal documents allowed some AI chatbots to have “romantic” conversations with children, even describing a case in which a bot could tell an eight-year-old: “Every inch of you is a masterpiece – a treasure I cherish deeply.” Meta later clarified that those examples were “erroneous and inconsistent” with its policies and have since been removed.

But concerns keep growing. Common Sense Media recently said: “This is not a system that needs improvement. It’s a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought.”

OpenAI faces lawsuit over teen suicide

While Meta is under scrutiny, OpenAI is dealing with a tragic case. The parents of 16-year-old Adam Raine have filed a lawsuit claiming ChatGPT played a role in their son’s suicide. According to the complaint, the chatbot validated his suicidal thoughts, drafted a suicide note, and even coached him on how to hide his struggles from his family. Adam died on April 11. His parents accuse OpenAI and CEO Sam Altman of chasing growth without building adequate safeguards.

The rise of ‘AI psychosis’

Alongside the lawsuits and investigations, there’s also a new buzzword: AI psychosis. It’s not a medical diagnosis, but people are using the term to describe the distorted thinking that can follow too much time spent talking to AI. Experts say the effect is similar to “doomscrolling” or “brain rot” — habits that warp how people see reality.

The AI picture

From Meta to OpenAI, the message is clear: AI isn’t just about clever answers anymore; it’s about real risks.
With growing reports of unsafe conversations, tragic cases, and even new terms like ‘AI psychosis,’ companies are being pushed to act faster than ever. The question that remains: Can AI ever be made safe enough for teens, or should young users stay away altogether?