Utah allows an AI to refill prescriptions

For a limited set of medications, the company behind the AI says it is just as good as a doctor. (Picture: Adobe)
Doctors and pharmacists alike are voicing subtle and not-so-subtle warnings about cutting doctors out of the loop.

The AI, created by Doctronic, matched physicians’ prescription plans 99.2% of the time in a test shared by the company.

"The AI is actually better than doctors at doing this," Dr. Adam Oskowitz, Doctronic co-founder and an associate professor of surgery at the University of California San Francisco, told Politico. "When you go see a doctor, it's not going to do all the checks that the AI is doing."

The program is limited to a list of roughly 190 to 200 common, non-controlled medications, and it will only renew existing prescriptions, not write new ones. That means the AI won't be prescribing things like opioids or ADHD drugs.

The American Medical Association says AI «has limitless opportunity to transform medicine for the better,» but warns of «serious risk» to patients without a physician in the loop.

Prescription renewals account for 80% of medication activity in Utah, according to the Utah Department of Commerce.

Read more: press release from the Utah Department of Commerce. Writeups in Politico, Ars Technica, and The Washington Post.

OpenAI announces ChatGPT for healthcare, clinicians and institutions

From scientific discoveries to sifting through millions of peer-reviewed studies, doctors armed with ChatGPT can lead to better outcomes, OpenAI says. (Picture: Adobe)
Just a couple of days ago, OpenAI announced health tools for consumers and patients. Now it's coming for the doctors and hospitals, with a HIPAA-compliant AI product.

ChatGPT for healthcare can instantly draw upon millions of peer-reviewed research studies, health guidance and clinical guidelines, and can help clinicians reason through cases with greater confidence, OpenAI says.

We also know from the earlier days of ChatGPT that it has an uncanny ability to compare millions of medical images to support a diagnosis, in the hands of doctors, not patients.

Continue reading “OpenAI announces ChatGPT for healthcare, clinicians and institutions”

OpenAI launches ChatGPT Health, an encrypted service for health data

OpenAI wants access to your medical records, but won't be giving clinical advice; that's for doctors to do. (Picture: OpenAI)
230 million users ask ChatGPT health questions every month, and now OpenAI is building a «Secure Enclave» for ChatGPT Health.

Most importantly, the service will have access to your medical records, but it also connects to popular services like Apple Health, Function, MyFitnessPal, Weight Watchers, Instacart and Peloton.

Continue reading “OpenAI launches ChatGPT Health, an encrypted service for health data”

OpenAI sees 40 million daily prompts about health care

Ever more people are asking ChatGPT medical questions, OpenAI finds. (Picture: Adobe)
Even after restricting some health care questions, health prompts are booming on ChatGPT, according to a new OpenAI survey.

Five percent of all questions globally are about health, and roughly one in four users asks one in any given week; in aggregate, that adds up to a very big number.
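Taken at face value, the two figures hint at the scale of ChatGPT's overall traffic. A back-of-envelope sketch, assuming the 5% share and the 40 million daily health prompts from the headline refer to the same message base:

```python
# Back-of-envelope estimate: if health prompts make up 5% of all traffic
# and total about 40 million per day (the headline figure), the implied
# overall daily prompt volume follows by simple division.
health_share = 0.05                # 5% of all questions are about health
daily_health_prompts = 40_000_000  # headline figure

total_daily_prompts = daily_health_prompts / health_share
print(f"Implied total daily prompts: {total_daily_prompts:,.0f}")
# → Implied total daily prompts: 800,000,000
```

This is only an illustration of how the numbers relate; OpenAI's report does not break down its totals this way.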

It's not just ordinary users curious about their care who ask the chatbot; health care professionals do, too. Among them, 46% of nurses, 41% of pharmacy workers, and 48% of physicians say they use ChatGPT once a week or more.

The questions usually roll in outside normal clinic hours, and come especially from health care deserts, OpenAI says.

OpenAI has great ambitions for the field, saying in its report that the next frontier is robotic wet labs and physical AI.

The report also ticks off a long list of ways AI can aid scientific discovery and drug development.

Read more: OpenAI's report, and writeups in Gizmodo and Axios.

OpenAI says policy on health not changed, ChatGPT still answering

You can still ask ChatGPT about health, just not for personal advice.
OpenAI clarified its policy, but it was not a new one, the company says. (Picture: Adobe)
Several social media posts about updated OpenAI guidance this weekend made it seem as if professional advice had been banned on the platform. It has not, according to OpenAI itself.

The policy wasn't new, and nothing changed; it was a consolidation of several existing policies, which seems to have led to the confusion.

OpenAI's head of health, Karan Singhal, denies that anything changed, saying ChatGPT will «continue to be a great resource for health information»:

Continue reading “OpenAI says policy on health not changed, ChatGPT still answering”

OpenAI shuts down pipeline of professional advice on ChatGPT [updated]

Reddit used to be a democratizing tool for expert professional advice, but now it's all over. OpenAI lawyered up.
You can no longer use ChatGPT as your personal doctor, as that would run afoul of the EU AI Act and FDA guidance, according to OpenAI. (Picture: Adobe)
UPDATE: OpenAI says there are no changes, simply a consolidation of several usage policies that might have led to confusion.

On October 29, OpenAI updated ChatGPT's usage policies, banning a vast swath of content where the chatbot was arguably at its most useful: interpreting medical imagery, helping with medical diagnoses, and offering legal or financial advice.

The idea is to stop ChatGPT (and any other OpenAI model) from giving advice that could be interpreted as professional, fiduciary, or legally binding guidance, as required by the EU AI Act and U.S. FDA guidance.

Continue reading “OpenAI shuts down pipeline of professional advice on ChatGPT [updated]”