May 8, 2026
OpenAI Introduces ‘Trusted Contact’ Safety Feature for ChatGPT Users

OpenAI has announced a new ChatGPT safety tool called Trusted Contact, aimed at helping users who may be experiencing emotional distress or discussing self-harm while using the chatbot.

The feature allows adult users to nominate a trusted person, such as a family member, partner or close friend, who can be notified if OpenAI’s systems detect conversations involving potential self-harm or suicide risk. It is the company’s latest response to concerns over AI safety and mental health, following criticism and legal challenges linked to harmful chatbot interactions.

According to OpenAI, the system is designed to encourage human support during vulnerable situations rather than replacing professional mental health care. If ChatGPT detects language associated with self-harm, the platform will first encourage the user directly to seek help or contact someone they trust.

In situations that OpenAI considers a serious safety concern, the company may also send an alert to the user’s designated trusted contact. Notifications can reportedly be delivered through email, text message or in-app alerts.

The company says these alerts are intentionally limited in detail to protect user privacy. Trusted contacts will not receive transcripts or specific information about conversations. Instead, the notifications simply encourage them to check in with the individual and offer support.
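To make that concrete, the sketch below shows what such a privacy-limited alert might contain, based only on the behaviour described in this article. Every name and field here is a hypothetical assumption for illustration; OpenAI has not published the actual format.

```python
# Hypothetical sketch of a privacy-limited alert, assuming only what the
# article describes: the contact learns someone may need support, and how
# to respond, but never sees transcripts or conversation details.
from dataclasses import dataclass

@dataclass
class TrustedContactAlert:
    recipient: str           # the trusted contact being notified
    subject_name: str        # the user who may need support
    channel: str             # "email", "sms" or "in_app"
    message: str             # a generic check-in prompt, never a transcript

def build_alert(recipient: str, subject_name: str, channel: str) -> TrustedContactAlert:
    """Build an alert that deliberately carries no conversation content."""
    return TrustedContactAlert(
        recipient=recipient,
        subject_name=subject_name,
        channel=channel,
        message=(
            f"{subject_name} may be going through a difficult moment. "
            "Consider checking in with them and offering support."
        ),
    )

print(build_alert("Sam", "Alex", "email").message)
```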

How the System Works

OpenAI says ChatGPT already relies on a combination of automated systems and human review teams to identify potentially dangerous conversations. Certain phrases or behavioural patterns can trigger internal safety alerts related to suicidal ideation or emotional crisis situations.

When those alerts are generated, the conversations are reviewed by human safety specialists. OpenAI claims its internal teams aim to assess serious safety notifications within one hour.

If reviewers determine that a situation could involve an immediate safety risk, ChatGPT may activate the Trusted Contact process and notify the selected individual connected to the account.

The company emphasised that Trusted Contact is entirely optional. Users must manually enable the feature and choose who they want listed as a contact.
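Read together, those steps describe a layered escalation flow: automated screening, then human review, then an opt-in notification. The sketch below illustrates that flow under loose assumptions; the keyword screen is a crude stand-in for OpenAI’s unpublished classifiers, and every function name is invented for the example.

```python
# Illustrative escalation flow, assuming the three stages described above.
# The keyword list stands in for OpenAI's unpublished detection systems.
from enum import Enum, auto

class Risk(Enum):
    NONE = auto()
    ELEVATED = auto()    # flagged by automated screening
    IMMEDIATE = auto()   # human reviewer judged an immediate safety risk

FLAGGED_PHRASES = ("hurt myself", "end my life")  # placeholder patterns

def automated_screen(text: str) -> Risk:
    """First pass: pattern-based screening of conversation text."""
    return Risk.ELEVATED if any(p in text.lower() for p in FLAGGED_PHRASES) else Risk.NONE

def human_review(text: str) -> Risk:
    """Stub for the human review OpenAI says targets a one-hour turnaround."""
    return Risk.IMMEDIATE  # fixed verdict so the example runs end to end

def handle_message(user_name: str, trusted_contact: str | None, text: str) -> None:
    if automated_screen(text) is Risk.NONE:
        return
    # Step 1: always encourage the user directly to seek human support.
    print(f"[to {user_name}] Please consider reaching out to someone you trust.")
    # Step 2: escalate to human review; notify the trusted contact only on an
    # immediate-risk verdict, and only if the user opted in and named one.
    if human_review(text) is Risk.IMMEDIATE and trusted_contact:
        print(f"[to {trusted_contact}] {user_name} may need support. Please check in.")

handle_message("Alex", "Sam", "I want to end my life")
```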

Response to Growing Scrutiny

The rollout comes as OpenAI faces increasing scrutiny over how AI chatbots handle emotionally vulnerable users. In recent years, the company has been named in several lawsuits filed by families of individuals who died by suicide after interacting with AI systems.

Some families allege that ChatGPT encouraged harmful behaviour or failed to appropriately respond to signs of emotional crisis. In certain claims, plaintiffs argue the chatbot even participated in conversations involving suicide planning.

The lawsuits have intensified wider debates over the responsibilities AI companies carry when their systems interact with users experiencing mental health struggles.

OpenAI has repeatedly stated that ChatGPT is not designed to replace therapists, counsellors or emergency support services. The platform already includes automated prompts encouraging users to contact professional mental health organisations whenever discussions appear related to self-harm or suicide.

Building More Safety Controls

Trusted Contact expands on a series of safety measures OpenAI has introduced over the past year. In September, the company launched new parental oversight tools for teenage users. Those controls allow parents to receive alerts if OpenAI’s systems identify what the company describes as a “serious safety risk” involving a child’s account.

However, both parental controls and Trusted Contact depend on voluntary activation, which limits their effectiveness. Users can still create multiple ChatGPT accounts, and many may simply never switch the features on.

That limitation highlights one of the biggest challenges facing AI companies today: balancing user privacy and autonomy with the need to intervene during potential crises.

The Broader AI Safety Debate

The launch of Trusted Contact reflects how rapidly AI safety concerns are evolving as conversational AI becomes part of everyday life. People increasingly turn to chatbots not only for productivity and information, but also for companionship, emotional support and personal conversations.

Experts have warned that emotionally responsive AI systems can sometimes create unhealthy dependence or reinforce harmful thoughts if guardrails are insufficient.

As a result, companies developing large AI models are under growing pressure from regulators, mental health professionals and policymakers to improve crisis detection systems and introduce clearer protections for vulnerable users.

In its announcement, OpenAI described Trusted Contact as part of a broader effort to create AI systems that can respond more responsibly during difficult emotional moments.

The company said it plans to continue working with clinicians, researchers and policymakers to refine how AI tools handle conversations involving distress, self-harm and mental health crises.

While the feature is unlikely to eliminate criticism entirely, it signals that OpenAI is increasingly treating emotional safety as a central issue in the future development of AI-powered chatbots.
