💪 Hulk Immunity? Thanks, Vaccine!
Plus: Ready to trust AI with our bodies | Walgreens to roll out virtual care | Scrutiny of Amazon and Microsoft Cloud Services
Good morning!
Welcome to Healthcare AI News, your weekly dose of the latest developments and headlines in the world of Healthcare AI.
In this issue, we explore:
✅ Headlines: Microsoft launches new data & AI tools for healthcare insights
✅ Industry: White House wants to deploy advanced AI for healthcare
✅ Interesting Reads: Scientist finds radioactive tracers in food, wins Nobel
✅ Tech: MongoDB boosts app development for healthcare and insurance
✅ Feature: Mental Health Care and AI
📢 Exciting news: based on your feedback, we're increasing our newsletter frequency to Tuesdays and Thursdays starting next week! 📢
HEADLINE ROUNDUP
Are we ready to trust AI with our bodies? (Read More)
It’s not ‘Star Wars’-level tech yet, but doctors get a step closer to a bionic hand with special surgery and AI (Read More)
Microsoft launches new data and AI tools for better healthcare insights and experiences (Read More)
New superbug vaccine turns the immune system into “the Hulk” (Read More)
Google Cloud and healthcare leaders share how generative AI is transforming the industry (Read More)
Atlas Meditech maps future of surgery with AI, Digital Twins (Read More)
University of Hawaiʻi project aims to diagnose developmental delays, mental health conditions using AI (Read More)
Most US clinicians hesitant about gen AI adoption despite its potential (Read More)
How good are AI health technologies? We have no idea (Read More)
TOGETHER WITH COURSERA
Stanford AI in Healthcare Specialization
Artificial intelligence (AI) has transformed industries around the world, and has the potential to radically alter the field of healthcare.
Imagine being able to analyze data on patient visits to the clinic, medications prescribed, lab tests, and procedures performed, as well as data from outside the health system, such as social media activity, credit card purchases, census records, and Internet search logs that contain valuable health information, and you'll get a sense of how AI could transform patient care and diagnosis.
In this specialization, we'll discuss the current and future applications of AI in healthcare with the goal of learning to bring AI technologies into the clinic safely and ethically.
This specialization is designed for both healthcare providers and computer science professionals, offering insights to facilitate collaboration between the disciplines.
CME Accreditation
The Stanford University School of Medicine is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians.
View the full CME accreditation information on the individual course FAQ page.
INDUSTRY NEWS
White House wants to deploy advanced AI for healthcare (Read More)
Should Walmart be data-mining your Ozempic prescriptions? (Read More)
JAMA News: New AI tools must have health equity in their DNA (Read More)
Walgreens to roll out virtual care (Read More)
AHIMA launches SDOH data initiative (Read More)
US FDA panel says Amgen lung cancer drug data cannot be relied on (Read More)
There's one critical thing we can do to keep Alzheimer's symptoms at bay (Read More)
GE Healthcare advances limitless AI-driven healthcare solutions (Read More)
INTERESTING READS
Scientist finds radioactive tracers in food, wins Nobel (Read More)
TECH NEWS
IT unemployment soars to 4.3% amid overall jobs growth (Read More)
MongoDB unveils initiatives for faster healthcare and insurance app development (Read More)
Scrutiny of Amazon and Microsoft Cloud services could be misplaced (Read More)
How API marketplaces and strategic partnerships drive payer innovation (Read More)
Shortcomings of visualizations for human-in-the-loop machine learning (Read More)
HEALTHCARE AI JOBS
Lead Director, Conversational AI Delivery - CVS Health
Research Scientist, HPC and AI for Health Sciences - Oak Ridge National Laboratory
Clinical Data Scientist - Iodine
Postdoctoral Research Scientist - AI/ML in Medical Imaging and Adaptive Radiotherapy - CUIMC
Healthcare Generative AI Analyst - Millennium Physician Group
HUMOR OF THE DAY
THE FEATURE
Mental Health Care and AI
Reimagining Mental Health Care: Exploring the future of AI-driven solutions
It’s time for your online psychotherapy appointment.
You log in and type into the chatbox. Your therapist asks how you’ve been. You describe your week in detail. Your therapist responds and asks how these events made you feel.
But it’s not a person on the other side. It’s generative AI.
This is the (fictional) future of AI-enabled mental health care imagined, and warned against, by AI pioneers decades ago. And it’s still what many people think of first when they think of AI being used in mental health care today.
But that does not need to be the case. There are many other ways we can—and already do—use AI to address mental health. Without replacing human providers.
Today, we’re exploring two use cases of AI in mental health care we’re particularly excited about. And the risks we need to keep in mind.
Affective Computing AI for mental health diagnostics
One of the trickiest aspects of mental health monitoring and intervention is picking up on signs of mental illness or distress before it escalates.
Pattern recognition is a huge part of that, but mental health screening tools are still too generic to be meaningful for many patients. AI-enabled mental health diagnostic tools bring personalization to mental health symptom detection.
Mental health startup CompanionMx is a great example. Their platform uses voice analysis to screen for depression and anxiety, drawing on an audio diary that users record over time. Along with being an outlet for users to describe their experiences, the diary doubles as a record of how a user’s mental health fluctuates.
Another tool we’re excited about comes from AI-enabled video game startup thymia. Thymia’s games aren’t just for fun—they’re an AI-based diagnostic assessment used to screen for mental illness before a patient sees a clinician. As a patient-user interacts with the program, their voice, movement, and behavioral data are processed by the AI to quantify mental health.
One challenge behind these tools, as with many healthcare AI models, is data quality. As more of these models are built and used by real patients, their accuracy and sophistication improve, allowing them to become data sources and research tools for mental health in their own right.
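To make the voice-analysis idea concrete, here’s a minimal sketch of what an acoustic screening pipeline could look like. This is not CompanionMx’s or thymia’s actual method (those are proprietary); the feature set, synthetic recordings, labels, and simple classifier below are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of voice-based mental health screening.
# NOTE: this is NOT CompanionMx's or thymia's actual pipeline; the features,
# synthetic "recordings", labels, and model below are illustrative assumptions.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def extract_voice_features(audio: np.ndarray, sr: int) -> np.ndarray:
    """Summarize one audio diary entry as a small acoustic feature vector."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)    # timbre/articulation
    rms = librosa.feature.rms(y=audio)                        # loudness/energy
    zcr = librosa.feature.zero_crossing_rate(y=audio)         # rough voicing proxy
    return np.concatenate([mfcc.mean(axis=1), rms.mean(axis=1), zcr.mean(axis=1)])

# Synthetic stand-ins: in reality these would be users' diary recordings with
# labels derived from a validated screening instrument (e.g., PHQ-9 scores).
rng = np.random.default_rng(0)
sr = 16_000
recordings = [rng.normal(size=sr * 5).astype(np.float32) for _ in range(20)]
labels = np.array([0, 1] * 10)  # 0 = low risk, 1 = elevated risk (made up)

X = np.stack([extract_voice_features(a, sr) for a in recordings])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print("Elevated-risk probability:", model.predict_proba(X[:1])[0, 1])
```

In practice, a model like this would be trained on real diary recordings labeled against validated screening instruments and audited for bias across patient populations, which ties directly into the risks discussed below.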
Generative AI for mental health management
Amid shortages of mental health providers and long wait times for crisis resources, people are turning to self-driven apps to help manage mental health issues such as anxiety or even panic attacks.
A good example is Youper, a mental health chatbot. While it comes closer to the “AI-as-therapist” model than the other tools mentioned here, the point of Youper is not to replace traditional psychotherapy but to give patient-users a space to reflect and ground themselves in a conversational format.
Many of these tools are becoming popular enough that crisis hotlines sometimes refer callers looking for self-help resources to them as a stopgap measure. When getting in to see a mental health professional can mean weeks of calling around and waiting lists, it’s easy to see how these tools can help in the meantime.
And given that these hotlines increasingly use AI for triage and resource referral, we wouldn’t be surprised if crisis hotline AI were soon linking callers directly to these resources as well, if it isn’t already.
Combining mental health with AI: Understanding the risks
We’ll say it again. Using predictive analytics to personalize user experience for a mental health journaling app is not the same as replacing human mental health providers with generative AI.
Nor should it be.
Putting aside the question of whether AI can recreate the empathy and therapeutic relationship that form the foundation of traditional psychotherapy, there are other big concerns.
The biggest is generative AI’s ongoing trouble with bias and accuracy. Medical AI in general tends to reproduce the biases in the data it’s trained on, hence the need for diverse training data. And with generative AI chatbots, biased or inaccurate recommendations can be deeply misleading and even dangerous.
Final thoughts from HAN
As we covered (read: insisted) in our feature about AI and rural healthcare, we must focus our innovation efforts on the areas of healthcare with the highest need.
With the current skyrocketing demand for mental health providers, this is definitely one of them.
One of the highest-need areas in healthcare presents a wonderful opportunity for innovation, but it also demands particular care in how we deploy new technologies.
So, now it’s your turn to tell us.
Do you think we’re on the road to seeing AI-as-therapist solutions soon? And what other AI-enabled mental health solutions should we be watching out for?
TWEET OF THE WEEK
🤖🧠 The perception & trust in #AI are hugely influenced by our initial expectations & how it's introduced.
Studies explore the "AI placebo effect" & how it shapes human-AI interactions.
🔄 How we brand & market AI can determine its adoption & value.
— Healthcare AI Newsletter (@AIHealthnews)
Oct 11, 2023
🌟 Advertise With Us 🌟
Boost your brand among healthcare's most influential circle! Our diverse subscriber base includes top executives, key decision makers, and visionary professionals from leading organizations, making it the ultimate platform for your brand's success. 🔥
What'd you think of today's Newsletter?