Welcome to Healthcare AI News, your weekly dose of the latest developments and headlines in the world of Healthcare AI.
In this issue, we explore:
✅ Headlines: Airborne telemedicine
✅ Industry: Doctors remove 3-inch parasitic worm from woman’s brain
✅ Feature: Healthcare Foundation Models
✅ Interesting Reads: How to know if AI becomes conscious
✅ Tech: Introducing ChatGPT enterprise
✅ Venture Pipeline: Cellares announces $255M series C
Healthcare Foundation Models
Foundation models are key to unlocking healthcare AI’s power. How can we adopt them?
What do you think is one of the biggest pitfalls of the current state of medical AI?
Cost? Murky applicability? A lack of industry trust and investment?
What if we told you that one of the ways to address some of these big issues would be to make all of medical AI more like ChatGPT? And no, we don’t mean giving cancer diagnostic models the ability to tell jokes.
We mean using the deep learning approach that powers models like GPT-4 and Stable Diffusion. These kinds of models, trained on huge amounts of unlabeled data, are called foundation models.
Today, we’re discussing these kinds of models and how they apply to medical AI, because we can’t keep focusing only on innovation and our excitement about dreamed-up AI-enabled applications and startups. We need to build a more hospitable medical AI ecosystem.
Investment in foundation models is a great way to do that. In this feature, we’re going to go over why that is—and how you can help.
The Problem: Medical AI implementation is complex and costly
The current paradigm of healthcare AI is great in theory, and a bit less so in practice.
Published studies and tech demos tout AI’s potential to improve every area of healthcare, from clinical diagnostics to staffing. But actual implementation is harder.
As an example, Stanford’s Institute for Human-Centered Artificial Intelligence points out that, of the 593 models designed to predict COVID-19 outcomes, virtually none are used clinically.
So why that disconnect? There are a few key reasons:
EHR data is overwhelmingly multimodal, consisting of images, lab data, written notes, et cetera. Most of today’s medical AI models can’t parse all of that data together.
For the most part, healthcare models require ad hoc training sets—which Stanford HAI estimates cost upwards of $200,000.
Speaking of costs: The costs of deploying and maintaining healthcare models multiply because each model applies only to a single, highly siloed task within healthcare.
And with that level of specificity, models must be constantly retrained whenever the context shifts even slightly, such as when diagnostic standards for a condition are updated.
In summary? Using AI in healthcare is complex and costly, and not just on the front end. And it will stay that way unless we make some key structural changes.
The Theoretical Solution: Foundation models for medical AI
Using foundation models makes model adaptation a much smoother endeavor. But how?
Without getting into too much jargon-y detail: these models go through “pretraining,” in which they learn general-purpose representations from huge amounts of broad data. A pretrained model can then be adapted to new tasks and datasets with relatively little additional labeled data, a process called “transfer learning.”
These models can then more easily be adapted to new contexts and purposes. They solve some of our industry’s biggest AI implementation issues because they’re both modular and reusable. In other words: they would allow us to move beyond one-purpose algorithms.
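To make that concrete, here is a minimal transfer-learning sketch in PyTorch. It is an illustration under assumptions, not a prescription: the ImageNet-pretrained ResNet-18 stands in for a hypothetical medical imaging foundation model, the two-class “finding / no finding” head is a placeholder diagnostic task, and the training data loader is assumed rather than provided.

```python
# Hedged sketch: reuse a pretrained backbone for a new (hypothetical) diagnostic task.
import torch
import torch.nn as nn
from torchvision import models

# 1. Load a backbone already pretrained on a large, general dataset.
#    (ImageNet weights used here as a stand-in for a medical foundation model.)
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# 2. Freeze the pretrained weights so only the new task head is trained.
for param in backbone.parameters():
    param.requires_grad = False

# 3. Swap the final layer for a small head suited to the new task,
#    e.g., a binary "finding / no finding" classifier.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# 4. Fine-tune on a small labeled dataset for the target task.
#    `train_loader` is assumed to yield (image_batch, label_batch) pairs.
def fine_tune(train_loader, epochs=3):
    backbone.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(backbone(images), labels)
            loss.backward()
            optimizer.step()
```

The point of the sketch is the shape of the workflow: the expensive, general-purpose learning happens once during pretraining, and adapting to a new clinical context only requires training the small task-specific piece on top.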
This sustainable approach to AI models would allow us to pivot our brainpower from concentrating on algorithm design, model retraining, and model maintenance to what we’re really here for: the application of AI to healthcare.
Bridging the Gap: How can we promote this solution to the rest of our industry?
Now, that all sounds lovely in theory. But how do we actually move from writing about these solutions to seeing them applied to healthcare AI in the real world?
Simple: We must work to change the status quo. (Well, maybe that’s not so simple.)
One of the best ways to change the way things are done is through conversations and education. As we know from our industry’s experience adopting EHRs, government mandates and value-based payment were just part of the equation that made practices and hospitals finally abandon paper charts. Another huge part of it was training and building trust. After all, healthcare—perhaps more than any other industry—is founded on trust.
So, how do we build trust in—and even excitement about—this approach to medical AI implementation? By talking about it.
Specifically, we encourage you to support the work of startups and researchers dedicated to building foundation models, APIs to disseminate them, and relevant training datasets (more on that in a later issue).
Part of why we write these features is to help you, our audience, better understand the realities of healthcare AI today. But really, we want you to be empowered to change this landscape.
If you ask us, foundation models are the treatment our weak medical AI paradigm needs to grow strong (excuse the cheesy healthcare metaphor). As an informed stakeholder, you can help to bring that future about.
So, to better help you on that journey, we want to know: What do you want to better understand about healthcare foundation models? Reply here and let us know.