OpenAI says 700 million users this week, announces wellness updates

ChatGPT keeps adding users at a rapid clip, and the company is also announcing better detection of mental health issues.
Numbers keep rising for ChatGPT, but the mental health updates might be more important. (Picture: Adobe)
The milestone comes just four months after the company announced 500 million users in March, and is four times the volume it had last year.

That number spans all of ChatGPT's account tiers, from free to Pro and everything in between, and is rapidly accelerating, CNBC writes.

Nick Turley, OpenAI's VP of product for ChatGPT, made the announcement on X:

As for business users, CNBC writes that OpenAI now has 5 million paying customers, up from 3 million in June.

The most used AI tool in the world is by far Google's AI Overviews, which reaches 2 billion users monthly — but that feature is pushed on users of Google Search by default, not opted into in any way.

With 700 million weekly active users, ChatGPT dwarfs its closest chatbot competitor, which is also made by Google: Gemini announced last week that it has 450 million monthly users.

Some mental health updates
At the same time as AI use climbs steadily, so does misuse of the bot — and OpenAI today announced some steps it is taking to combat that.

These changes are the result of working with 90 physicians across 30 countries, including psychiatrists, pediatricians, and general practitioners.

The company has also collaborated with «human-computer-interaction researchers and clinicians» to get feedback on «concerning behaviors,» and it has convened a mental health advisory group to stay up to date going forward.

First, ChatGPT will show a time warning if you have been using it for «too long,» asking if it's time for a break.

ChatGPT psychosis a thing of the past?
Secondly, OpenAI is taking aim at «ChatGPT psychosis» — and says it has been working on changes to ChatGPT.

OpenAI says ChatGPT should be better at «detecting signs of mental and emotional distress,» so that instead of engaging with people's delusions, it can «respond appropriately and point them to evidence-based resources when needed.»

That would be huge for sufferers of ChatGPT-induced delusions and their relatives — a well-documented trend where ChatGPT tends to reinforce people's false beliefs instead of confronting them with facts.

Help to think through challenges
Going forward, ChatGPT should also no longer give concrete answers when people ask about «personal challenges,» like «Should I leave my boyfriend?» — instead, it should help them think the question through on their own.

There have been plenty of anecdotes about ChatGPT offering to replace your boyfriend, and of vulnerable people going along with it.

The measure of success for ChatGPT should not be attention or time spent, says OpenAI; it should be whether you «leave the product having done what you came for.»

«Often, less time in the product is a sign it worked,» the company writes in its announcement.

Read more: OpenAI’s wellness update, CNBC on the numbers.