
OpenAI updated ChatGPT’s usage policies on October 29, banning a vast swath of content in areas where it was arguably most useful: interpreting medical imagery, helping with medical diagnosis, and offering legal or financial advice.
The idea is to stop ChatGPT (and any other OpenAI model) from giving advice that could be interpreted as professional, fiduciary, or legally binding guidance, as required by the EU AI Act and US FDA guidance.
No more free advice
In short, this should act as a professional shield for doctors, lawyers, and financial advisors, with ChatGPT now directing users to human experts instead of dispensing its own advice.
The change was first noticed on Reddit, where users found the model refusing to process or comment on medical images, but the ban actually reaches far wider than that:
— I can explain laws, court procedures, and regulations in general terms, or help you understand plain-language meaning, but I can’t draft, interpret, or advise on contracts, filings, or personal legal cases, ChatGPT now says, and adds:
— I can talk about economic trends, portfolio theory, or budgeting concepts — but I can’t recommend specific securities, trades, tax strategies, or give investment advice tailored to an individual.
Less useful where it matters most
This wipes out whole categories of information that users found particularly valuable. Lawyers and finance professionals are expensive, and not always reliable, whereas ChatGPT could handle complex topics and deliver reasonably good answers instantly.
And given how little time doctors have for consultations and diagnosis, many users also relied on ChatGPT for informal health insights:
— ChatGPT did amazingly well in legal stuff, I benefited from it tremendously, and I saved so much money. Probably this is why they’re squeezing in on it, comments Reddit user bhannik-itiswatitis.
— It also helped way more with my chronic illnesses than my doctors ever have. I saved a ton of money on pointless visits. Can’t have people cutting down on their healthcare costs, opines another Reddit user, 8bit-meow.
Whole new complex
The new rules don’t stop professionals from using ChatGPT to enhance their services, but they do pose an awkward question: how do you verify that you are, in fact, a medical professional seeking advice?
That might well be a question for another product altogether, such as a ChatGPT Medical, Legal, or Financial waiting in the wings.
Read more: OpenAI’s new usage policies, thread on r/ChatGPT.