Roupen Odabashian, Hematology/Oncology Fellow at the Karmanos Cancer Institute, shared a post on LinkedIn:
“The Citation Problem Nobody’s Solving in Clinical Decision Support
You ask a CDS tool a question. It gives you an answer with 5 references. Great. You click a reference. You get the full paper.
Now what? Where exactly did the answer come from?
You’re staring at 15 pages of dense medical text, hunting for the sentences the AI used to form its conclusion.
This is broken.
Here’s what I realized using Claude Code to analyze data in Visual Studio:
When I asked where a specific insight came from, it pointed me to the exact cells in the Excel sheet. Row 47, Column D. No guessing.
But in clinical decision support? We’re still stuck with “See: Smith et al., 2023” and a link to a 20-page PDF.
The gap is obvious:
Structured data → AI can point to exact locations
Unstructured text → AI gives you “somewhere in this paper, good luck”
Why this matters for patient care:
Physicians don’t have time to hunt through references. If they can’t quickly verify where an AI recommendation came from, one of two things happens:
They trust it blindly (dangerous)
They ignore it entirely (wasteful)
Neither is acceptable.
The fix isn’t complicated:
Imagine clicking a reference and seeing the exact sentences highlighted: the specific lines the AI used to generate that answer. Google already did a version of this long before AI, back when a plain old Google Search showed the matching text right in the result snippet.
Not “this paper supports the recommendation.”
But “these three sentences on page 7 are why I said this.”
The companies that solve source-level highlighting will win.
Because trust in AI isn’t about accuracy scores. It’s about showing your work.
The next level of clinical decision support isn’t better answers. It’s transparent answers.
Who’s building this?”
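The mechanism the post describes, sentence-level provenance, is straightforward to prototype. Below is a minimal sketch in Python, under stated assumptions: the names (SourceSpan, split_sentences, cite_sentences), the keyword-overlap ranking, and the toy source text are all invented for illustration and are not any vendor's actual API. A production CDS tool would swap the ranking for embedding retrieval or an LLM attribution step; the only point here is that every citation carries exact character offsets a viewer can highlight.

from dataclasses import dataclass
import re

@dataclass
class SourceSpan:
    doc_id: str   # which paper the sentence came from
    page: int     # page the sentence appears on, if known
    start: int    # character offset where the sentence begins
    end: int      # character offset where the sentence ends
    text: str     # the sentence itself, ready to highlight in a viewer

def split_sentences(doc_id: str, page: int, text: str) -> list[SourceSpan]:
    """Split a page of text into sentences, keeping character offsets."""
    return [SourceSpan(doc_id, page, m.start(), m.end(), m.group().strip())
            for m in re.finditer(r"[^.!?]+[.!?]", text)]

def cite_sentences(question: str, spans: list[SourceSpan], top_k: int = 3) -> list[SourceSpan]:
    """Rank sentences by naive keyword overlap with the question.
    A real CDS tool would use embeddings or an LLM attribution step;
    the point is only that each returned sentence keeps its exact offsets."""
    q_words = set(question.lower().split())
    return sorted(spans,
                  key=lambda s: len(q_words & set(s.text.lower().split())),
                  reverse=True)[:top_k]

if __name__ == "__main__":
    # Toy source text, invented purely for illustration.
    page_7 = ("Overall survival improved in the intervention arm. "
              "Treatment-related adverse events were manageable. "
              "The benefit was largest in the biomarker-positive subgroup.")
    spans = split_sentences("smith_2023", 7, page_7)
    for s in cite_sentences("did overall survival improve", spans):
        # A viewer can use doc_id, page, and start/end to highlight these exact characters.
        print(f"{s.doc_id} p.{s.page} [{s.start}:{s.end}] {s.text}")

The design point is that offsets, not just document identifiers, travel with every citation, so a front end can open the paper at the cited page with the supporting sentences already highlighted, which is exactly the "these three sentences on page 7" experience the post asks for.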