Vincent Rajkumar, Professor of Medicine at the Mayo Clinic in Rochester and Editor‑in‑Chief of Blood Cancer Journal, shared Hedgie’s post on X, adding:
“AI is not intelligent. It’s analyzing and stringing together content we generated and repackaging it. Yes, we can use AI wisely to our benefit, and make our lives easier.
Like if I’m a doctor, and I need to solve a computer or coding problem, AI is immensely useful and incredibly fast and efficient. But I’m not kidding myself that it’s intelligent. It’s finding and linking and presenting to me information I don’t know that other intelligent humans who do know have generated before.”
Quoting Hedgie’s post:
“A researcher invented a fake eye condition called bixonimania, uploaded two obviously fraudulent papers about it to an academic server, and watched major AI systems present it as real medicine within weeks.
The fake papers thanked Starfleet Academy, cited funding from the Professor Sideshow Bob Foundation and the University of Fellowship of the Ring, and stated mid-paper that the entire thing was made up. Google’s Gemini told users it was caused by blue light. Perplexity cited its prevalence at one in 90,000 people.
ChatGPT advised users whether their symptoms matched. The fake research was then cited in a peer-reviewed journal that only retracted it after Nature contacted the publisher.
My Take
The researcher made the papers as obviously fake as possible on purpose. The AI systems didn’t catch it. Neither did the human researchers who cited it in real journals, which means people are feeding AI-generated references into their work without reading what they’re actually citing.
I’ve covered the FDA using AI for drug review, the NYC hospital CEO ready to replace radiologists, and ChatGPT Health launching this year. All of that is happening in the same environment where a condition funded by a Simpsons character and endorsed by the crew of the Enterprise was being presented as emerging medical consensus. The people making these deployment decisions seem to believe the pipeline from research to AI to patient is more supervised than it actually is. This experiment suggests it isn’t supervised much at all.”