OpenAI reports millions of troubling mental health conversations in ChatGPT

OpenAI reports high rates of troubling mental health conversations and says it is mitigating the issue with a model update.
Put into actual numbers, the scale of mental health issues surfacing in ChatGPT is astonishingly high. (Picture: generated)
OpenAI says it is taking remedial steps against three categories of troubling interactions with ChatGPT, and is readying responses developed with expert psychologists.

The first category is «psychosis, mania or other severe mental health symptoms,» which shows up in around 0.07% of weekly active users, or roughly 560,000 people in actual numbers.

The second is «self-harm and suicide,» ticking in at 0.15% of weekly ChatGPT users, or 1.2 million people in actual numbers.

And the third is «emotional reliance on AI,» accounting for 0.15% of users in a given week, also 1.2 million in real numbers.

Huge numbers
When you pair these percentages with ChatGPT's estimated 800 million weekly users, they add up to millions of people showing some sign of emotional distress each week.
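As a rough sanity check, assuming the 800 million weekly-user figure holds: 0.07% × 800,000,000 ≈ 560,000 users, and 0.15% × 800,000,000 = 1,200,000 users for each of the other two categories, per week.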

OpenAI cautions, however, that these conversations are so rare that they are difficult to measure reliably, and warns that changes to how they are measured could make the numbers swing wildly in future reports.

To better detect and support conversations like these, OpenAI has recruited some 300 psychiatrists and mental health professionals to improve the model and provide fine-tuned responses, and says:

— We’ve updated the Model Spec to make some of our longstanding goals more explicit: that the model should support and respect users’ real-world relationships, avoid affirming ungrounded beliefs that potentially relate to mental or emotional distress, respond safely and empathetically to potential signs of delusion or mania, and pay closer attention to indirect signals of potential self-harm or suicide risk.

New process for mental health issues
In practice this means a whole new process for handling such conversations, including cases of over-reliance on the model itself.

The model shouldn’t replace friends and family or foster «exclusive attachment,» OpenAI says, and should instead encourage users to reach out to real people.

The updated GPT-5 model scores well on the metrics OpenAI itself provides, producing the «desired behavior» in 65% to 97% of responses.

Will it misfire again?
GPT-5, and 4o before it, have stumbled over these «problems» several times, often replying that «you seem to be carrying a lot right now» and suggesting mental health hotlines in answer to completely innocuous questions, playing it overly safe in its responses.

That caution is meant to protect vulnerable users, but many are still reporting the same issues today.

There is also a history of so-called ChatGPT psychosis, as well as wrongful-death lawsuits against OpenAI over its responses to troubled, suicidal teens.

Read more: report from OpenAI, writeups on TechCrunch and Wired. Discussion on r/ChatGPT.