Douglas Flora: Why Not Imagine a World Where the Best Care Is Not the Exception, But the Rule?
Douglas Flora/medium.com

Douglas Flora, Executive Medical Director of Yung Family Cancer Center at St. Elizabeth Healthcare, President-Elect of the Association of Cancer Care Centers, and Editor in Chief of AI in Precision Oncology, shared a post on LinkedIn:

“Every headline warns what AI might take. Almost none ask what it could give.

‘AI is not merely a general-purpose technology — it is an inventor of inventions.’ — James Manyika

Ruth Porat closed her keynote address last summer at the 2025 annual meeting of the American Society of Clinical Oncology in Chicago with two words: why not? She borrowed them from Demis Hassabis, who used them to justify predicting the structure of two hundred million proteins before most scientists believed it was possible. She then built three questions from them — questions that belong at the close of this piece, where they will land with full force.

She is the President and Chief Investment Officer of Alphabet, a two-time cancer survivor, and on that morning in May she was the only person in a room full of oncologists willing to ask out loud what the field should have been asking for years. Those of you who read my work here or on LinkedIn know where I stand: AI optimist, patient advocate, someone who believes the distance between where we are and where we could be is a source of direction rather than despair. Almost no one is counting the right ledger — the cancers caught earlier, the crashes prevented, the diagnoses that arrive in time to matter. Medicine should be asking these questions. Mostly, it isn’t. That troubles me.

Last Monday, I was a guest at a private Speakers Dinner in Washington. The room had the particular energy of a conversation that knows it won’t be repeated.

Around the table sat CEOs, think tank directors, Nobel laureates — the kind of people who have stopped being impressed by one another and are therefore genuinely listening. What I felt, sitting among them, was not the weight of the room but the quality of its commitment. These were people who had spent careers thinking seriously about hard problems, who had chosen, at this particular moment of maximum disruption, to think harder rather than wait. Nobody was theorizing. They were building.

James Manyika, Google’s Senior Vice President of Research, Technology, and Society, kept returning to a phrase with the patience of someone who had watched the simpler version of this idea do real damage when misunderstood: inventor of inventions. Not what AI does today. What it makes possible in the generation after today — the cascade, the recursion, the way a discovery becomes the scaffolding for questions that couldn’t yet be asked. I was trying to capture it on my phone, under the tablecloth, without losing the thread.

I was, as far as I could tell, the only oncologist at the table.

Nobody announced this. It became apparent slowly, the way it does when you’re the only person in a room translating everything into a different language as the conversation unfolds.

Every prior general-purpose technology — steam, electricity, the internet — shared a structure: a power source applied broadly, multiplying human productivity wherever it was pointed, reshaping entire economies without participating in its own extension. Steam powered the mill and the locomotive. Electricity powered the factory and the operating room. The internet connected everything that could be connected. None of them could help design what came next. A steam engine cannot invent a better steam engine. Electricity cannot discover new applications for electricity. They were passive at the frontier of their own recursion. AI is not. That is the category claim Manyika was drawing toward — why inventor of inventions describes something genuinely new in kind, not merely in scale.

The proof had already arrived before most people were ready to receive it. DeepMind’s AlphaFold, which cracked protein-structure prediction in 2020 and by 2022 had released predicted three-dimensional shapes for roughly two hundred million proteins, made legible a vast class of drug targets that structural biology had simply given up on. Not because the targets weren’t real. Because we couldn’t see them. The shape was the problem. AlphaFold gave us the shape. Within three years, pharmaceutical companies were designing molecules against proteins long listed as undruggable. The tool didn’t just solve a problem. It dissolved the category the problem had been stored in.

In cancer genomics, the recursion continues. Sequencing a tumor’s genome tells us which mutations drove its growth; AI is beginning to read the evolutionary logic beneath — which resistance mutations will emerge under treatment pressure, which pathway the cancer will route around and when, which patient is progressing silently while her scans still look clean. Each answer generates the next question. The frontier moves.

What struck me most over those two days in Washington was hearing what the people in that room were actually doing — not just describing. Making extraordinary bets on AI infrastructure, committing the capital and patience required to build something whose full consequences won’t be legible for a generation. Designing programs to upskill and reskill workers across industries far removed from technology, so that the benefits of this transition compound broadly rather than narrowly. Maggie Johnson, who leads Google.org, put it with a precision worth keeping: the goal shouldn’t be teaching people AI tools, she said, but cultivating enduring skills. The tools change. The judgment required to evaluate them doesn’t. These were not people talking about the power of the possible. They were the people building it, often thanklessly, with the kind of deliberate urgency that doesn’t photograph well.

At lunch, I found myself sitting across from Michael Spence — who won the Nobel Prize in Economics in 2001 for his work on signaling, the way information asymmetries shape markets and what it costs when one party to a transaction knows vastly more than another — alongside Fabien Curto Millett, Google’s Chief Economist. Spence offered the most useful single sentence I heard all week: use AI with an open but critical mindset, he said; use it as an accelerator of learning, ask hard questions and evaluate the answers. He was not describing a technology strategy. He was describing an intellectual posture, the same one that distinguishes a careful clinician from a credulous one. Curto Millett characterized the current moment with a word I found precise and honest: jagged. Past pure experimentation, but still in the early innings of full systemic adoption. J.P. Morgan estimates an 8 to 9 percent increase in U.S. GDP over the next decade from AI adoption — trillions in economic uplift — and yet the technology has moved far faster than the institutions built to deploy it. In thirty minutes, a Nobel economist and a chief economist had named the problem I watch play out every week in hospital hallways. This is what happens when disciplines cross without first asking permission.

The press has been running one story about artificial intelligence for three years, and it is not a wrong story: the displacement is real, the disruption is real, and the workers who spent decades building expertise, and who now watch machines absorb the tasks that organized their professional lives, are real people facing genuine losses. That story deserves the coverage it receives.

Here is the other ledger.

We are still finding too many cancers too late. We are still treating with drugs that are effective against the tumor and corrosive to everything surrounding it. We maintain a long list of molecular targets called undruggable — a word that deserves its honest translation: we have stopped trying. The radiologist in a rural hospital reads chest CTs without an AI trained on twelve million images that would catch the nodule she might miss at the end of a fourteen-hour shift. The community oncologist makes genomic treatment decisions without the computational infrastructure that a major academic center treats as routine. The patient who will be diagnosed next year with pancreatic cancer at stage four had stage one already present on a scan eighteen months earlier, when nobody had the tools to interpret it correctly.

What if this goes right?

The pancreatic cancer is caught at stage one, because an algorithm flagged what the exhausted radiologist almost missed. The resistance mutation in a patient’s tumor is anticipated before the first drug fails, so the second is ready. The forty-minute prior authorization hold is eliminated, and the forty minutes are returned to the room where a patient needs someone to explain what ‘metastatic’ means. The word ‘undruggable’ is retired from the clinical vocabulary the way ‘unsequenceable’ was retired after the Human Genome Project — not because the targets disappeared, but because we built the tools to see them.

That is the story the coverage is missing. That is the ledger nobody is keeping.

Ruth Porat offered four words at the Washington Forum that have not left me. She said: “Responsible execution is a choice.” A diagnostic algorithm trained exclusively on data from major academic medical centers encodes, with perfect efficiency, every historical barrier that determined who accessed those centers in the first place. The model reflects the choices of its architects — their training data, their outcome labels, their definition of what recovery looks like. Matt Renner, President of Google Cloud, put the mandate plainly: AI is not a peripheral experiment, he said, but the new intelligent operating system for your enterprise. In medicine, enterprise means every decision about which patients reach which tools. The architects are us.

The Forum was about AI and the economy. From where I sat, the economy was also a man in his sixties asking whether he should fly to his daughter’s wedding next month, given that his neutrophils are low and the trip will exhaust him, and whether it matters. That question has no correct answer in any dataset. It requires someone who has learned to hold uncertainty without flinching, who can speak honestly without cruelty, and stay in the room with whatever the answer turns out to be. The technology does not generate that capacity. The technology returns the time to practice it.

I came home from Washington more determined than I was when I arrived. The people I sat with in that room are spending their careers working to close the distance between where we are and where we could be, and finding it clarifying rather than discouraging. Curious people. Solution builders. Courageous enough to make bets on a future they may not live to fully see, selfless enough to care about who shares in it.

A specific kind of compound interest accrues when you surround yourself with the genuinely curious. It does not appear in any economic model. It shows up on Monday morning, when you walk back into the clinic.

During COVID, we discovered something about ourselves that most institutions spend careers hoping will never be tested. Medical centers that had deliberated for years over protocol changes rewrote them in days; regulatory agencies that had moved in months compressed their timelines into weeks; health systems that had resisted telehealth for a generation deployed it at national scale before the first wave had crested. When the urgency was undeniable and the death toll arrived on every front page, every morning, cumulative and impossible to ignore, we found we were capable of something far closer to the speed the moment required.

Cancer’s death toll is just as real. It simply doesn’t arrive all at once, in numbers that make the front page. It arrives one person at a time, each death its own private emergency, distributed quietly enough that the institution never feels the cumulative weight the way it felt a pandemic. The capacity for urgency did not leave us. We stopped applying it the moment the crisis became invisible again.

The experimentation phase is over. We know the tools work. We have the studies, the pilots, the published outcomes — the evidence is not the bottleneck. What medicine lacks is not proof but will: leaders from inside the field willing to ask the same questions Porat asked in Chicago, not from a stage, but in the room where an AI solution just got tabled for the fourth consecutive quarter. That room exists in every health system in this country. Someone has to decide what happens in it.

Last summer in Chicago and last Monday in Washington, I was taking notes on the same argument from two different angles. In Chicago, it was Porat on a stage. In Washington, it was Manyika at a dinner table, and me with my phone under the tablecloth, trying not to miss anything. The argument was the same in both rooms: the tools exist, the recursion is real, the frontier is moving — and the question is not whether medicine will be transformed but whether medicine will choose how.

Porat’s three questions. Here, in full, as she asked them:

Why not imagine a world where earlier detection is available everywhere?

Why not imagine a world where the best care is not the exception, but the norm?

Why not bring the word “manageable” — and the word “cured” — to millions of people?

‘Manageable’ was her word for her own cancer — two diagnoses, Memorial Sloan Kettering, Cliff Hudis beside her then, her friend now, the same man who runs ASCO today. She placed it before “cured” deliberately, because she understands the exact distance between the two. Most patients are not at Memorial Sloan Kettering. Most don’t have Cliff Hudis. Most live somewhere in the gap between what medicine can do at its best and what it delivers on a Tuesday afternoon in a community hospital — with an exhausted radiologist, a four-week wait for genomics, and a prior authorization hold running forty minutes while the patient in the room asks whether he should fly to his daughter’s wedding.

He is the reason the questions matter. Not as aspiration. As a clinical mandate, with tools already attached.

Porat asked them from a stage in Chicago to a room that should not have needed to hear them from her. A question does not save lives by being asked beautifully. It saves lives when a health system leader hears it on Monday, and on Tuesday cancels the fourth consecutive postponement of the AI adoption review, and starts.

AlphaFold dissolved the word ‘undruggable.’ COVID dissolved the belief that health systems cannot move fast. The only thing left to dissolve is the assumption that the urgency belongs to someone else.

Why not?”
