ChatGPT reports millions of troubling mental health conversations

OpenAI reports large numbers of mental-health-related conversations and says it is mitigating the problem with an update.
Expressed in absolute terms, the number of mental health issues surfacing in ChatGPT is astonishingly high. (Picture: generated)
OpenAI says remedies are being taken against three categories of troubling interactions with ChatGPT, and it is preparing expert-informed psychological responses to them.

The first category is «psychosis, mania or other severe mental health symptoms,» which covers around 0.07% of weekly users, or roughly 550,000 people in actual numbers.

Second is «self-harm and suicide,» coming in at 0.15% of ChatGPT users, or 1.2 million people in actual numbers.

And the third is «emotional reliance on AI,» accounting for 0.15% of users in a given week, also about 1.2 million people.


OpenAI to route sensitive prompts to reasoning models, introduce parental controls

Messages of acute distress will be routed to reasoning models in the future.
ChatGPT should better detect mental health issues, and OpenAI has convened a panel of experts. (Picture: generated)
Following a teen’s suicide and a murder-suicide linked to ChatGPT in a single week, OpenAI has announced wellness updates coming in the next months.

These include alerting parents when teens are in distress, routing queries to a more powerful reasoning model when appropriate, and giving parents more control over their children’s usage.

The company has assembled a council of experts in «youth development, mental health, and human-computer interaction,» which will shape how AI can «support people’s well-being,» the company says in a blog post.


Sam Altman addresses ChatGPT psychosis, calls them «extreme cases»

Judging by anecdotal evidence, you'd think ChatGPT psychosis was an epidemic.
Only «a small percentage» get delusional from ChatGPT use, Altman says. (Picture: Adobe)
As more and more publications dig into people becoming delusional through AI use, being led down rabbit holes or believing they are superhuman, the CEO of OpenAI addressed the topic today.

In a lengthy x.com post, Altman called the issues «edge cases,» but welcomed both attachment to ChatGPT and using it as a kind of «life coach.»

Recently, OpenAI announced a wellness update to reduce sycophancy and push back against delusions, and the hope is that this will reduce some of the risks.
