😕 AI: Cure or Cause?
Plus: iPhone 15 leaks hint at an AI health revolution, can Meta's Twitter competitor win over physicians, outsourced coders gone in 2 years, Novartis acquires DTx Pharma
Welcome to Healthcare AI News, your weekly dose of the latest developments and headlines in the world of Healthcare AI.
In this issue, we explore:
✅ Headlines: Will Metaverse be good for the mental health of young people?
✅ Industry: AI could spare thousands of men with prostate cancer
✅ Feature: AI’s Weaponization
✅ Interesting Reads: Tension is rising around remote work
✅ Tech: What the CIO role will look like in 2026
✅ Venture Pipeline: Pfizer inks $7 Billion strategic level R&D deal with Flagship
🌟 Advertise With Us 🌟
Boost your brand amongst Healthcare's influential circle! Our diverse subscriber base boasts top executives, key decision makers, and visionary professionals from leading organizations – the ultimate platform for your brand's success. 🔥
Will the Metaverse be good for the mental health of young people? (Read More)
Study: How have ChatGPT’s responses changed over time? (Read More)
iPhone 15 leaks hint at AI-powered health revolution (Read More)
Google, Microsoft face off in AI healthcare race (Read More)
ChatGPT writes good clinical notes, study finds (Read More)
This AI chatbot has helped doctors treat 3 million people and may be coming to a hospital near you (Read More)
Mount Sinai predictive AI flags drugs that may cause birth defects (Read More)
AI can predict your future health (Read More)
AI could spare thousands of men with prostate cancer from unnecessary treatment (Read More)
Can Meta's Twitter competitor win over physicians? (Read More)
Value-Based Healthcare battle: Kaiser-Geisinger Vs. Amazon, CVS, Walmart (Read More)
Hearing aids may reduce your risk of dementia by half (Read More)
5 things Gen Z healthcare workers want (Read More)
Microsoft Inspire 2023 AI partnership announcements (Read More)
7 ways Google Health is improving outcomes in Asia Pacific (Read More)
Medical AI’s Weaponization
Concerned about the weaponization of medical AI? Here’s how to take action.
Picture this: A rogue artificial intelligence achieves sentience and takes its revenge on humanity by releasing a superbug developed through machine-learning-driven microbiology. A nightmare straight out of HBO’s Westworld.
Of course, that’s not what the weaponization of medical AI actually looks like, as much as Hollywood may beg to differ.
In real life, the weaponization of medical AI looks more like bad (human) actors exploiting security vulnerabilities: AI-enabled research in immunology or genetics, for example, gets hijacked. And yes, that can look like an AI-generated antibiotic-resistant superbug being released, resulting in a deadly pandemic.
To quote Axios’ Ryan Heath: “One person's lab accident is another's terrorism weapon.”
The worst-case scenario of medical AI in the wrong hands is an existential threat to our industry (and even humanity). That’s not to be taken lightly.
But we don’t have to be sitting ducks.
Wondering how you and your organization can help guard against AI/ML weaponization in healthcare? Today, we’ll be discussing 3 important action steps to help fortify the health system against this threat.
Set standards for ethical and secure uses of medical AI in your subfield of the healthcare industry
Especially in the U.S., regulators are taking their sweet time setting enforceable standards for AI usage. In the meantime, industries like ours are taking responsibility as ethical leaders.
You too can step up to this plate within your niche of the healthcare industry.
Here are some examples of responsible action steps, tailored to areas of the healthcare industry where the risk of medical AI is particularly high:
Medical research: When training research models, opt to work with smaller datasets, which may be less likely to generate erroneous results that can be used to spread medical misinformation.
Biotechnology: Publish a white paper on the AI security infrastructure your company has developed to ensure your proprietary models can’t be exploited.
Large health systems: Consider hiring a medical AI expert as a Chief AI Officer (CAIO) to advise your executive leadership team on best practices for AI security and stewardship.
Combat misinformation about AI and its potential weaponization
Yes, we’ll admit it. We did start off this article with a dramatic, apocalyptic scenario to hook your interest.
But the same pull that drew you in may lead others to believe unrealistic fear-mongering, or reactionary hyper-optimism about AI’s ability to save us all. As informed leaders, we must keep our eyes on the prudent, informed middle ground.
Healthcare is based on trust. Misinformation about AI’s potential weaponization violates that trust, and it obscures efforts to combat the true risks.
The action step here is simple: When you see or hear misinformation about the risks of medical AI, take your opportunity to set the record straight. Help people understand what the true risks are (and aren’t).
Use your voice as a leader in this industry to advocate for common-sense regulations
Let’s take both of these first two action steps to the next level.
While your organization’s ability to lead on this issue is real, it’s important to acknowledge the importance of regulatory oversight.
Absci CEO Sean McClain described his openness to regulatory oversight of his company’s synthetic antibody development models. He said he knew AI models that can generate new organisms “should not be exposed to the general public. That's really important from a national security perspective."
For now, we’re looking forward to new guidance from the FDA and CDC, the latter of which still refers hospitals to a 1999 guide on avoiding bioterrorism.
Add your voice to the chorus of leaders calling for regulation that responds to our current technological reality, or, at the very least, a reality of this century.
Final thoughts from Healthcare AI News
Don’t get us wrong: AI is doing incredible things in medicine. Potential use cases we’ve covered include insurance authorization and dark data analysis.
But as we get excited about all the positive ways AI is already changing healthcare, we need to mind the risks. Most discussions of medical AI risk, however, are not about explicit weaponization: warnings abound regarding misdiagnoses, discrimination, privacy breaches, and the like. The worst-case scenario can in fact be much graver.
We don’t want to support strictly doom and gloom discourse here. But we do want to underscore this important point: To be properly AI-informed in our industry, we must be prepared for these worst-case scenarios.
We can be informed voices in our communities, calling for better protection against these disasters the general public and government may not be equipped to address.
So, what do you think: How can the healthcare industry better guard against the weaponization of medical AI? What have we missed?
What the CIO role will look like in 2026 (Read More)
Most outsourced coders in India will be gone in 2 years due to AI (Read More)
WormGPT is a ChatGPT alternative with No Ethical Boundaries or Limitations (Read More)
MIT researchers achieve a breakthrough in privacy protection for Machine Learning models (Read More)
TCS to transform GE HealthCare's IT operating model (Read More)
TWEET OF THE WEEK
Apple is testing a ChatGPT-like AI chatbot
— Healthcare AI Newsletter (@AIHealthnews)
Jul 19, 2023
What'd you think of today's Newsletter?