Francisco J. Esteva: Can we truly trust AI in cancer care?
Francisco J. Esteva, Chief of the Division of Hematology and Medical Oncology at Lenox Hill Hospital, shared a post on LinkedIn:
“How can we truly trust AI in cancer care if we don’t understand its decision-making process?
In oncology, AI’s role is growing—from pinpointing tumors to detecting genetic mutations—but its “black box” nature raises concerns for doctors and patients alike.
Imagine getting a life-changing diagnosis based on an algorithm whose reasoning remains hidden. For healthcare professionals, understanding how AI arrives at its predictions is essential for making informed, safe decisions.
That’s why researchers are working on “explainable AI,” developing tools like SHAP and LIME to make AI’s decision-making process transparent. These tools are helping clinicians see how AI assesses medical images, identifies key features, and pinpoints signs of cancer, offering insights that go beyond the surface.
As we continue to integrate AI into cancer care, transparency isn’t just beneficial—it’s crucial. Explainable AI has the potential to improve accuracy, boost trust, and enable personalized treatment plans, bringing us closer to a future where AI supports rather than obscures critical medical decisions.
Healthcare professionals have an essential role in advocating for transparency, ensuring AI is a partner we can rely on.”
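For readers curious what a tool like SHAP actually produces, the minimal sketch below attributes a classifier's prediction to individual input features. It is illustrative only: the scikit-learn breast cancer dataset, the random forest model, and the workflow are generic stand-ins assumed for demonstration, not the clinical systems the post describes.

```python
# Minimal sketch: explaining one prediction of a "black box" model with SHAP.
# The dataset, model, and features are illustrative assumptions, not clinical tools.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A public tumor dataset (benign vs. malignant) used purely for illustration.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

# Train a simple opaque model.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# SHAP values quantify each feature's contribution to pushing a single
# prediction away from the model's average output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, classifier output is a per-class list or a
# 3-D array; either way, take class-1 contributions for the first test sample.
contribs = (
    shap_values[1][0] if isinstance(shap_values, list) else shap_values[0, :, 1]
)

# Print the five features that most influenced this one prediction.
top = np.argsort(np.abs(contribs))[::-1][:5]
for i in top:
    print(f"{data.feature_names[i]:>25s}: {contribs[i]:+.3f}")
```

LIME works along similar lines, but instead of game-theoretic attributions it fits a small interpretable surrogate model to the black-box model's behavior in the neighborhood of a single prediction.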