OpenAI adds trusted contact feature to ChatGPT for at-risk conversations
OpenAI launches 'Trusted Contact', a feature that lets users designate someone to receive alerts if ChatGPT detects conversations involving self-harm or suicidal ideation.
According to TechCrunch, OpenAI unveiled a new feature for ChatGPT this week called Trusted Contact: a mechanism that allows users to designate a trusted person—a family member, friend, or professional—to receive a notification if the system detects that a conversation could be related to self-harm or suicidal ideation. The announcement came on May 7, 2026, and forms part of the company's ongoing expansion of safety guardrails around user wellbeing.
This isn't the first move of its kind in the industry. Some mental health platforms have had similar protocols for years. However, it's notable that one of the world's most widely used general-purpose chatbots is incorporating this layer of protection natively.
How it works
According to available information, the flow is fairly straightforward: users configure their trusted contact in advance within their account settings. If ChatGPT identifies risk signals in the conversation—the model makes that determination in real time—it can trigger a notification to that designated person. The technical details about what threshold triggers an alert, what exact information is shared, and how user consent is managed have not been fully specified by OpenAI at the time of publication.
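To make that flow concrete, here is a minimal Python sketch. It is purely illustrative: OpenAI has not published a schema, a threshold, or a delivery mechanism, so every name in it (TrustedContact, RISK_THRESHOLD, send_alert) is invented for this article.

```python
from dataclasses import dataclass
from typing import Optional

# Purely hypothetical sketch. OpenAI has not published the actual schema,
# threshold, or notification mechanics; all names below are invented to
# illustrate the configure -> detect -> notify flow described above.

@dataclass
class TrustedContact:
    name: str
    channel: str           # e.g. "sms" or "email"
    address: str
    consent_on_file: bool  # user agreed to alerts when configuring the contact

RISK_THRESHOLD = 0.9  # invented value; the real trigger is unspecified

def send_alert(contact: TrustedContact) -> None:
    """Stand-in for whatever delivery mechanism the real system uses."""
    print(f"[alert] notifying {contact.name} via {contact.channel}")

def maybe_notify(risk_score: float, contact: Optional[TrustedContact]) -> str:
    """What a system like this might do with a real-time risk signal."""
    if risk_score < RISK_THRESHOLD:
        return "no_action"
    if contact is None or not contact.consent_on_file:
        # Without a configured, consenting contact, fall back to
        # passive safety: surface crisis-line resources in-chat.
        return "show_helpline"
    send_alert(contact)
    return "contact_notified"
```

Note how much policy hides in this toy version: the threshold and the consent check are precisely the two details OpenAI has yet to specify.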
That last point matters: the balance between protective intervention and user privacy is delicate. An automatic notification to a third party inherently breaks the confidentiality of the conversation. OpenAI will need to spell out exactly which conditions trigger an alert and how much control users actually retain over the process.
Why this matters
Conversational AI assistants have been used for some time by people in emotionally vulnerable situations, often because the anonymity of a chat feels safer than talking to another person. This creates a specific responsibility for the developers of these systems: they cannot ignore that their models are, in fact, becoming a first point of contact in mental health crises.
The usual response so far has been to display messages with crisis helpline numbers (the "passive safety" model). Trusted Contact represents a step toward active safety: the system doesn't just show a help number but attempts to connect users with their actual support network. It's a significant shift in approach.
For teams building products on top of LLM APIs—including integrations with Claude through MCP or custom agents—this OpenAI decision is also a signal of broader trends: regulators in Europe and the United States are increasingly paying attention to how AI systems handle sensitive conversations. Those without a clear policy in this area before regulation arrives will have to build one in a rush afterward.
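For a reference point that exists today, OpenAI's Moderation API already exposes self-harm categories that can screen messages before or after they reach a model. In the sketch below, the endpoint, model name, and category names are real; the handle_message policy function and its canned responses are our own hypothetical scaffolding, not anything OpenAI ships.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Real category names exposed by OpenAI's Moderation API.
SELF_HARM_CATEGORIES = ("self_harm", "self_harm_intent", "self_harm_instructions")

def flags_self_harm(text: str) -> bool:
    """True when the Moderation API flags any self-harm category."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    return any(getattr(result.categories, name) for name in SELF_HARM_CATEGORIES)

def handle_message(text: str) -> str:
    # Hypothetical policy hook: the moderation call is real, but how an
    # application responds to a flag is a product decision, not an API one.
    if flags_self_harm(text):
        return "surface crisis resources and pause normal processing"
    return "continue as normal"
```

Anything beyond that, such as notifying a third party, is where the consent and privacy questions from the previous section come back in.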
Who should pay attention
- End users of ChatGPT who want to configure an additional safety net, especially if the assistant is part of their daily routine.
- Developers and product managers building on OpenAI or Anthropic APIs who need reference points on how to implement wellbeing safeguards.
- Compliance and legal teams at companies deploying internal chatbots or customer support systems: this move could become a de facto industry standard for what counts as "best practice".
- HCI researchers and AI ethics scholars studying the intersection between conversational systems and mental health.
We appreciate that OpenAI is making this type of functionality visible rather than handling it opaquely in the backend. That said, the devil will be in the implementation details: if the consent mechanism isn't well designed, a feature intended to protect could end up leaving some users more vulnerable.