Danielle Bitterman: LLMs might help reduce burnout by drafting responses to patient portal messages in EHRs
Danielle Bitterman, Assistant Professor at Harvard Medical School, shared on LinkedIn:
"LLMs might help reduce burnout by drafting responses to patient portal messages in EHRs. But is this just an efficiency aid, or do LLMs impact clinical judgement in more meaningful ways? In our pre-clinical study, 6 oncologists wrote responses to 100 simulated cancer patient questions about symptoms, paired with realistic cancer histories. Then, the oncologists edited GPT4 responses to the same messages.
The content of manual responses was significantly different from the content of GPT4 draft and GPT4-edited responses. GPT4 errors tended to arise not from incorrect biomedical factual knowledge, but from incorrect clinical gestalt and misidentification of the urgency of a situation. This is the type of thing not covered by Medical Question Answering-type benchmarks. What does this mean? We found pre-clinical evidence of anchoring based on LLM recommendations, raising the question: Is using an LLM to assist with documentation simple decision support, or will clinicians tend to take on the reasoning of the LLM? Human-in-the-loop shouldn’t be the only way we protect against LLM risks, and existing benchmarks don’t give great insight into failure modes and the safety of LLMs for healthcare. Let’s take a careful approach now, to make sure we see the full promise of AI in the future.
Read our study here.”
Source: Danielle Bitterman/LinkedIn