OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm
The company is expanding its efforts to protect ChatGPT users in cases where conversations may turn to self-harm.
OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and safety concerns. Friends, family members, or caregivers designated as a "Trusted Contact" will be notified if OpenAI detects that a person may have discussed topics like self-harm or suicide with the chatbot. "Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference," OpenAI said in its announcement. "It offers another layer of support alongside the localized helplines already available … Read the full story at The Verge.