
Douglas Flora: Should AI-driven clinical decision support be regulated?
Recently, npj Digital Medicine published an article by Gary Weissman, Toni Mankowitz, and Genevieve Kanter showing that unregulated large language models produce medical device-like output.
While technological advances like this are transforming medicine and healthcare, many questions remain about the safe and effective implementation of these tools in clinical practice.
Dr. Douglas Flora, Executive Medical Director of Oncology Services at St. Elizabeth Healthcare, President-Elect of the Association of Cancer Care Centers, and Editor-in-Chief of AI in Precision Oncology, commented on the paper in a recent LinkedIn post:
“AI in Healthcare: A Regulatory Wake-Up Call?
Large language models (LLMs) like GPT-4 and Llama-3 are showing incredible promise in clinical decision support.
But here’s the catch: they’re not regulated as medical devices, and yet, they’re already generating recommendations that look a lot like regulated medical guidance.
A recent study found that even when prompted to avoid device-like recommendations, these AI models often provided clinical decision support in ways that could MEET FDA CRITERIA for a medical device.
In some cases, their responses aligned with established medical standards – but in others, they ventured into HIGH-RISK TERRITORY, making treatment recommendations that should only come from trained professionals.
This raises a big question: Should AI-driven clinical decision support be regulated? And if so, how do we balance innovation with patient safety? Right now, there’s no clear framework for LLMs used by non-clinicians in critical situations.
What does this mean for healthcare professionals?
- AI is advancing fast, and while it can be a powerful tool, it’s crucial to recognize its limitations.
For regulators?
- There’s an urgent need to define new oversight models that account for generative AI’s unique capabilities.
For AI developers?
- Transparency, accuracy, and adherence to safety standards will be key to building trust in medical AI applications.
As AI continues to evolve, we’re entering uncharted territory. The conversation about regulation isn’t just theoretical – it’s becoming a necessity.
What do you think? Should AI in clinical decision support be regulated like a medical device? Let’s discuss.” – Douglas Flora
Read the article below and let us know what you think. Find out more about AI in Healthcare from Dr. Flora on OncoDaily.
“Unregulated large language models produce medical device-like output”
Authors: Gary Weissman, Toni Mankowitz, and Genevieve Kanter.