GPT-4 and the Curbside Consult: LLMs for better clinician information retrieval
When non-physicians hear the term “curbside consult,” they may picture getting advice on the side of the road.
But if you’re clued in, you know the curbside consult is one of a provider’s most vital tools, especially when a patient case is particularly hard.
In layman’s terms, a curbside consult is the act of asking a clinical colleague for an opinion about a tough case.
Now, this vital tool is getting an upgrade.
What kind of upgrade? That trusted colleague can now be a large language model (LLM) like GPT-4.
Of course, while studies reporting how ChatGPT can pass medical licensing exams are cropping up in the news, you likely won’t be seeing Dr. GPT-4 for your sinus infection anytime soon.
However, in certain scenarios, your doctor may be turning to a chatbot for a second opinion.
Here’s how LLMs are already being factored into clinical consultations—and where we can expect them to go in the future.
The Curbside Consult: What it is and how technology is changing it
These consults are some of the most casual interactions clinicians will have with one another—short of running into each other at a bar while they’re off the clock.
They can be off the cuff, occurring in hallways and cafeterias.
However, that doesn’t mean they’re unprofessional. These kinds of interactions still require clinicians to adhere to their code of ethics and deeply trust one another.
Of course, AI-enabled operations and diagnostics startups have already started claiming to be clinicians’ newest assistants. Are these clinical information retrieval LLMs any different?
The “curbside consult” generative AI models differ from these tools in their versatility.
The vision behind them is that a clinician could come to them with any tough clinical question and the model would provide a sophisticated response: one that is genuinely useful to the provider and reveals a new insight.
LLMs go to school: How medical trainees are learning to use chatbots
Medical educators are seeing this potential.
And even before the models are ready for widespread implementation, they know tomorrow’s clinicians need to be ready to use them.
A large part of that training is learning how to query the chatbot. For instance, residents learn to opt for phrasing like “You are a doctor seeing a 39-year-old woman with knee pain” instead of using the system like a search engine.
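To make the contrast concrete, here is a minimal sketch of the two prompting styles as plain message construction. The helper function and its parameters are hypothetical illustrations, not part of any residency curriculum or specific chatbot API:

```python
def build_consult_prompt(specialty: str, patient_summary: str, question: str) -> list[dict]:
    """Build a role-framed chat prompt: the style residents are taught to use."""
    return [
        # Frame the model as a clinician, not a search index.
        {"role": "system",
         "content": f"You are a {specialty} physician giving a curbside consult."},
        # Give it the patient context plus a focused clinical question.
        {"role": "user",
         "content": f"{patient_summary} {question}"},
    ]

# Search-engine style (discouraged): a bare keyword query with no framing.
search_style = [{"role": "user", "content": "knee pain 39 year old woman causes"}]

# Role-framed style (encouraged): the model "sees" the patient.
consult_style = build_consult_prompt(
    specialty="primary care",
    patient_summary="You are seeing a 39-year-old woman with knee pain.",
    question="What differential diagnoses should I consider?",
)
```

The same clinical facts appear in both, but the role-framed version gives the model a persona and a task, which is the habit the training drills in.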
A positive aspect of this kind of training? Building AI-informed physicians from the ground up. These future doctors will more nimbly be able to use the technology as it “grows up” with them.
But critics have also been voicing concerns: Will these “baby doctors” become too dependent on chatbots from the start of their careers?
How can LLMs become more useful for clinicians?
Before we get ahead of ourselves, we should mention that scientists working with LLMs developed for this purpose are still wary.
Researchers from Stanford HAI acknowledge that, while the technology is very promising, the systems must be further refined. One of their key suggestions is providing uncertainty estimates for low-confidence answers.
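One way researchers approximate that kind of uncertainty signal is agreement-based sampling (often called self-consistency): ask the model the same question several times and treat low agreement as low confidence. This is a general research technique, not necessarily the specific method Stanford HAI has in mind. A minimal sketch with a stubbed-out model standing in for real LLM calls:

```python
from collections import Counter

def confidence_by_agreement(sample_fn, prompt: str, n: int = 5):
    """Sample the model n times; confidence = fraction agreeing with the modal answer."""
    answers = [sample_fn(prompt) for _ in range(n)]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    return top_answer, top_count / n

# Stub standing in for a real LLM call. Assumption: the real model is sampled
# with nonzero temperature, so repeated calls can disagree.
_canned = iter(["septic arthritis", "septic arthritis", "gout",
                "septic arthritis", "septic arthritis"])

def fake_llm(prompt: str) -> str:
    return next(_canned)

answer, confidence = confidence_by_agreement(
    fake_llm, "Most urgent cause of acute monoarthritis?"
)
if confidence < 0.6:
    print(f"Low confidence ({confidence:.0%}); treat '{answer}' with caution.")
else:
    print(f"{answer} (agreement: {confidence:.0%})")
```

A real deployment would surface that agreement score alongside the answer, so the clinician knows when the chatbot is the over-confident guy rather than the trusted colleague.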
After all, no one wants to ask the over-confident guy who’s wrong half the time for advice.
But wariness isn’t going to stop this technology from growing and disseminating. With more and more investment pouring into them, LLMs are scaling up and becoming more advanced.
Without formal top-down regulation, the onus is mostly on healthcare institutions and companies to work out the kinks of these tools themselves.
And in the vein of self-regulation, one of the most exciting initiatives we’ve seen is the generative AI “prompt-a-thons” spearheaded by institutions like NYU Langone. These citizen science events aim to uncover hallucinations and build more accurate clinical chatbots. We’re all for it.
Ultimately, what it all comes down to with these tools is need.
There’s no question. Healthcare has a staffing problem. And in clinical care, time is a precious resource. Tools that solve these problems—even if imperfectly—are going to be in high demand.
Plus, in complicated clinical cases, getting a niche specialist’s input at a moment’s notice is a tall order for our health system. On-demand access to that expertise is one of the best use cases we can think of for medical LLMs.
But especially when the stakes are so high, we must all do our part to keep refining this technology to minimize errors.
Now, a question (or three) for clinicians reading this:
How do you feel about turning to an LLM for a curbside consult? Have you done it? What was it like? We’d love to know!