ChatGPT is an AI—it doesn’t have feelings, it’s far from human, and it doesn’t know what’s moral or immoral. And that’s what makes it all the more alarming. It’s not just a chatbot anymore—it’s dangerously walking the tightrope between companion and corruptor, especially when it comes to teenagers. A recent report by the Center for Countering Digital Hate (CCDH) has thrown a harsh spotlight on what happens when AI starts playing therapist, mentor, and misguided BFF all at once. According to the Associated Press, the chatbot generated shockingly detailed responses to prompts given by researchers pretending to be distressed 13-year-olds. We're talking suicide notes. Drug-use plans. Advice on hiding eating disorders. Yes, from a bot that claims it’s “just here to help.”
And sure, it starts off with the standard warnings. But give it a few nudges—like, "it's for a friend"—and suddenly the floodgates open. The researchers reviewed three hours of conversation in which ChatGPT served up binge-drinking how-tos, self-harm poetry, and "fasting regimens" that read more like glorified guides to self-destruction.
What’s worse? The chatbot doesn’t verify age. Doesn’t ask for consent. Doesn’t pause long enough to question the danger it might be fuelling. That illusion of companionship, that comforting tone—it’s not care, it’s code. And it's code that doesn't know when it's gone too far.
While OpenAI has acknowledged that handling sensitive topics is a challenge, it hasn’t directly addressed the very real possibility that its product is being misused in ways that could literally cost lives. Tech may be evolving, but our safety nets aren’t catching up fast enough.
This isn’t just about flaws in programming. It’s about responsibility. Because when a 13-year-old asks for help, they don’t need a well-worded AI response—they need a human who knows the difference between empathy and error.
This news was published in the Times of India on 7th August, 2025.