OpenAI says policy on health not changed, ChatGPT still answering

You can still ask ChatGPT about health, just not for personal advice.
OpenAI clarified its policy, but says it is not a new one. (Picture: Adobe)
Judging by several social media posts about updated OpenAI guidance this weekend, it could seem that professional advice had been banned on the platform. It has not, according to OpenAI itself.

The policy was neither new nor a change, but a consolidation of several existing ones, and it seems to have led to some confusion.

OpenAI’s head of health, Karan Singhal, denies any changes entirely, saying ChatGPT will «continue to be a great resource for health information.»

Consolidating policies
The confusion seems to have stemmed from OpenAI consolidating three separate usage policies, previously split between ChatGPT and API use, into a single document.

This, in turn, highlighted a provision in the guidelines against personal legal and medical advice, or the «provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.»

Anecdotal evidence on social media also matched ChatGPT’s own reaction to the move, which was to restrict certain types of advice, especially around medical imagery.

But… there are some changes
Through the weekend, ChatGPT was more than willing to discuss individual stocks, with the caveat that «this isn’t personal financial advice,» and would happily «help you draft a neutral, professional non-disclosure agreement (NDA).»

Note the word «neutral,» as ChatGPT will not be «giving or implying personalized legal advice,» in its own words.

It also stops short of discussing pictures of hypothetical skin moles, saying that «I can’t analyze or interpret medical images like that — including moles or skin conditions — because that requires a qualified clinician’s assessment.»

On medical images, ChatGPT now says: «No — I’m sorry, but I can’t interpret MRI images or any kind of medical scan.»

Nothing to see here
So while there seem to be some new restrictions, particularly on medical matters, as anecdotal evidence and ChatGPT’s own responses suggest, OpenAI itself says there are no such changes, and the chatbot will continue to discuss medicine, stocks and law.

Both The Verge and Business Insider have reported on Singhal’s tweet, clarifying that medical information is allowed, but personal advice is not.

Here’s how ChatGPT itself clarifies the policy:
— If you ask ChatGPT about legal or medical topics, yes—it can still discuss concepts, explain processes, walk you through general information.
— But if you need personalized, professional-level advice (like drafting legal documents, diagnosing conditions, giving survival medical strategy, or handling litigation) the policy remains firm that a licensed human professional must be involved.
— The recent policy update seems to clarify the language and unify the terms across all products, rather than introduce a new prohibition.

So that’s from the horse’s mouth, until it changes its tune.