OpenAI to route sensitive prompts to reasoning models, introduce parental controls

Messages of acute distress will be routed to reasoning models in the future.
ChatGPT should get better at detecting mental health issues, and OpenAI has convened a panel of experts. (Picture: generated)
Following a teen’s suicide and a murder-suicide, both aided by ChatGPT within a single week, OpenAI is preemptively announcing wellness updates coming over the next few months.

This includes alerting parents of teens in distress, routing queries to a more powerful reasoning model when appropriate, and giving parents more control over their teens’ usage.

The company has assembled a council of experts in «youth development, mental health, and human-computer interaction,» which will shape how AI can «support people’s well-being,» they say in a blog post.

How to deal with despair
The emphasis in the post is on experts and evidence-based research into «how our models should behave in mental health contexts.»

The problem is that a lot of people are being led down rabbit holes by ChatGPT, especially the 4o model, so OpenAI is making some changes over the next «120 days.»

First off, all models will soon «detect signs of acute distress» and route those chats to a reasoning model like GPT-5-thinking or o3.

These models are better at thinking in context and will more «consistently follow and apply safety guidelines,» OpenAI says.
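OpenAI hasn’t published how this routing works under the hood, but the description suggests a lightweight screening step sitting in front of the model picker. Here is a minimal sketch in Python using the public OpenAI SDK; the screening prompt, the screening model, and the choice of o3 as the fallback target are all illustrative assumptions, not OpenAI’s actual implementation:

```python
# Hypothetical sketch of distress-based routing. OpenAI has not published
# implementation details; the prompt, models, and routing logic here are
# illustrative assumptions only.
from openai import OpenAI

client = OpenAI()

SCREEN_PROMPT = (
    "Classify whether the user message shows signs of acute emotional "
    "distress. Answer with exactly one word: YES or NO."
)

def looks_distressed(message: str) -> bool:
    """Crude screening step; a real system would use a tuned classifier."""
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed lightweight screening model
        messages=[
            {"role": "system", "content": SCREEN_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return result.choices[0].message.content.strip().upper() == "YES"

def route_chat(message: str) -> str:
    # Route sensitive conversations to a reasoning model, which follows
    # safety guidelines more consistently; keep everything else on the
    # fast default model.
    model = "o3" if looks_distressed(message) else "gpt-4o"
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": message}],
    )
    return reply.choices[0].message.content
```

A single-prompt screen like this misfires easily, which may explain the canned replies users are already reporting.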

For the last few days, following last week’s updates, users on r/ChatGPT have reported numerous cases of ChatGPT simply replying «It sounds like you are carrying a lot right now» and «You can find supportive resources here [link].» These replies appear to be part of the wellness features, although the detection seems to be misfiring a lot.

Parental controls
Secondly, parental controls will be enabled «within the next month.»

This will let parents link their teen’s account via email and control how ChatGPT responds through «age-appropriate behavior rules», which sounds a bit fanciful (can you really set an age and have ChatGPT respond only with appropriate content?).

Parents should also be able to switch off features like memory and chat history, which could impact how ChatGPT follows up on previous preferences and overall goals.
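There is no public API for these controls, so this is purely speculative, but the announced feature set maps naturally onto a per-teen settings object. Every field name in this Python sketch is a hypothetical illustration:

```python
# Purely illustrative sketch of linked-account parental controls.
# OpenAI has not published an interface for this; all fields are assumptions.
from dataclasses import dataclass

@dataclass
class TeenAccountControls:
    teen_email: str              # account linked by the parent via email
    age_appropriate_rules: bool  # the announced "age-appropriate behavior rules"
    memory_enabled: bool         # switchable per the announcement
    chat_history_enabled: bool   # likewise switchable

controls = TeenAccountControls(
    teen_email="teen@example.com",
    age_appropriate_rules=True,
    memory_enabled=False,
    chat_history_enabled=False,
)
```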

Notifications of trouble
Then there’s the real kicker: parents will be able to receive notifications when ChatGPT detects a «moment of acute distress», basically an alarm function. This probably means teens will stop trusting it, because it can snitch to their parents.

OpenAI says they will use expert input on this feature to «support trust between parents and teens,» which goes to show they know how sensitive this function will be.

This is just the beginning, OpenAI says, as the expert panels start weighing in and the company keeps balancing real-world reports of actual harm against a completely rational response that leads us ever closer to the thought police.

Read more: OpenAI’s announcement, plus writeups on TechCrunch and Mashable.