Fabio Ynoe de Moraes: Key takeaways from the FDA’s first Digital Health Advisory Committee
Mar 16, 2025, 15:12

Fabio Ynoe de Moraes, Associate Professor at Queen’s University, shared a post on LinkedIn:

“Navigating the Future of Generative AI in Healthcare: Key Takeaways from the FDA’s First Digital Health Advisory Committee.

The FDA recently held its first digital health advisory committee meeting to explore how generative AI can be safely and effectively regulated in medical devices. With nearly 1,000 AI-enabled devices already authorized – but none using adaptive or generative AI – the agency is grappling with new challenges posed by this rapidly evolving technology.

Here are four key takeaways from the discussion that could shape the future of AI in healthcare:

1. Transparency Matters: Patients Want to Know When AI Is Used

Grace Cordovano, a patient advocate, shared her experience of receiving mammogram results flagged by “enhanced breast cancer detection software.” Without clarity on how AI influenced her care, she faced confusion and unanswered questions.

- 91% of patients want to know if AI is used in their care decisions.
- The committee emphasized the importance of informing patients not just if AI was used, but also how it contributed to their care. Structured feedback mechanisms for patients were recommended as part of the process.

Key Insight: Transparency builds trust. Patients deserve to understand how AI impacts their health decisions.

2. Equity Must Be at the Core of AI Regulation

Generative AI holds immense promise for extending care to underserved communities – older adults, racial and ethnic minorities, and rural populations. However, there’s a real risk of amplifying existing health inequities if the technology isn’t developed and monitored responsibly.

- Advisory members stressed that equitable performance across diverse populations should be a requirement, not an afterthought.
- Postmarket surveillance for bias and equity gaps was highlighted as critical.

Key Insight: AI tools must be trained and validated on inclusive datasets – and continuously monitored for fairness.

3. Hospitals Are Still Learning How to Use Generative AI Safely

Health systems are cautious about adopting generative AI without robust validation frameworks.

- Robert Califf noted that no U.S. health system is fully equipped to validate AI algorithms for clinical use.
- HCA Healthcare and others are “tiptoeing” into generative AI, focusing on training staff and understanding model behavior before expanding usage.

Key Insight: While the potential is vast, generative AI isn’t ready for unsupervised clinical decision-making. Rigorous testing and training are essential.

4. Error Detection and Reporting Processes Need Clarity

Unlike traditional AI models, generative AI produces outputs that introduce unique regulatory complexities.

- Radiology Partners found a 4.8% error rate in AI-generated radiology impressions, reduced to 1% after radiologist review.
- Experts agreed that human oversight is critical, and clear processes must exist for spotting, reporting, and addressing errors.

Key Insight: Defining what constitutes an adverse event or error – and ensuring manufacturers have robust monitoring systems – will be crucial for safe adoption.

As the FDA moves forward, collaboration among regulators, industry leaders, healthcare providers, and patients will be vital to harnessing the transformative power of generative AI while safeguarding public health.

What are your thoughts on these developments? How do you see generative AI shaping the future of healthcare? Let’s discuss in the comments!”