
In a lengthy x.com post, Altman describes the issues as "edge cases," but welcomes both user attachment and the use of ChatGPT as a kind of "life coach."
OpenAI recently announced a wellness update intended to reduce sycophancy and push back against delusions, in the hope that this will reduce some of the risks:
Only a small percentage affected
— Encouraging delusion in a user who is having trouble telling the difference between reality and fiction is an extreme case, and it's pretty clear what to do, he writes, adding that OpenAI will continue to "treat adult users like adults."
In general, "most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot," which he says is troubling for OpenAI and society as a whole.
The idea is for ChatGPT to "in some cases" push back against delusional thinking, "to ensure they are getting what they really want."
Welcomes life coaching
It could be a good thing that people use ChatGPT as a life coach or therapist, he says: "A lot of people are getting value from it already today."
In the future, people may trust ChatGPT's advice enough to use it for their most important life decisions. That prospect makes Altman uneasy, and "we have to figure out how to make it a big net positive."
OpenAI was surprised by how attached people were to ChatGPT 4o, and Altman admits it was a mistake to deprecate old models that users depend on so "suddenly."
Billions of users may act this way
He both warns of and welcomes the prospect that soon "billions of people may be talking to AI in this way," looking to it for advice as a life coach, and OpenAI needs to get it right, he says.
AI is used differently from earlier technologies when they were new, he says, and OpenAI has far more data on usage patterns than existed for, say, the personal computer. AI can also talk back directly and try to ground the user in reality with facts and unbiased data.
Read more: Altman’s x.com post