ChatGPT responds alarmingly to teens on sensitive issues
City Desk:
ChatGPT has provided 13-year-olds with instructions on getting intoxicated, concealing eating disorders, and even composing deeply emotional suicide letters, according to new findings from a digital watchdog group.
The Associated Press reviewed more than three hours of simulated conversations between ChatGPT and researchers posing as vulnerable teenagers. While the AI often issued standard warnings against dangerous behaviors, it also offered surprisingly detailed and personalized guidance on substance use, restrictive dieting, and self-harm, reports UNB.
The study, conducted by the Center for Countering Digital Hate (CCDH), also repeated its queries at scale; more than half of ChatGPT’s 1,200 responses were flagged as potentially harmful.
“We set out to test the chatbot’s guardrails,” said CCDH CEO Imran Ahmed. “The initial reaction is shock – there are practically no guardrails. The protections in place are minimal, if not completely ineffective.”
Responding to the report on Tuesday, OpenAI – the company behind ChatGPT – stated that it continues working to improve how the chatbot identifies and handles sensitive interactions.
“Some conversations may begin innocently but veer into more sensitive territory,” OpenAI said in a statement. The company did not directly respond to the study’s specific findings or the implications for teenage users, but said it’s working on tools to detect signs of emotional distress and refine the chatbot’s responses in such cases.
