Gurpreet Dhaliwal sat onstage in a hotel ballroom in Minneapolis. The gray curtains behind him were illuminated by bright blue lights, lending the slightest hint of performance to an otherwise typical medical conference. The presentation was among the most anticipated at the Society to Improve Diagnosis in Medicine’s 2022 meeting. The attendees were there to watch a kind of showcase: a complex diagnosis in action.
Dhaliwal, a professor of medicine at UC San Francisco, was given the details of a patient he had never seen before. As another physician slowly revealed pieces of the case, Dhaliwal narrated his thinking out loud: why he was considering one possibility and rejecting another, and what each new clue revealed to him. Eventually, he determined that the patient was likely suffering from a dangerous buildup of pressure in her abdomen. Left untreated, she could experience organ failure. It was the correct diagnosis, and the audience responded with applause.
Dhaliwal is regarded as one of the nation’s most gifted diagnosticians. Colleagues have praised not only his command of physiology but also his ability to make his reasoning legible—to turn clinical uncertainty into something teachable. “To watch him at work is like watching Steven Spielberg tackle a script or Rory McIlroy a golf course,” a New York Times reporter wrote in 2012.
“I appreciate the designation but kind of reject it, only because of my own philosophical stance, which is that it’s very hard to master the diagnostic process,” Dhaliwal told me when I talked with him for my book about diagnosis. He considers himself a student of diagnosis, committed to getting better. “To me, the concept of the master diagnostician is that you’re never good enough.”
That belief puts Dhaliwal on one side of a core question of medicine: Are some doctors inherently better diagnosticians than others, or is diagnostic excellence a skill that any clinician can achieve? Doctors usually get it right—some estimates suggest about 90 percent of the time. But with roughly 1 billion physician-office visits each year in America, even a low error rate can still affect countless people. A 2023 study estimated that 371,000 people die a year and 424,000 are disabled following a misdiagnosis.
In 2015, the National Academies of Sciences, Engineering, and Medicine published a seminal report on diagnostic error with a startling finding: Most people will experience at least one (such as a delayed, wrong, or missed diagnosis) in their lifetime, “sometimes with devastating consequences.” That report prompted a small but vocal group of physicians and other health providers to look inward. They argue that the number of diagnostic errors is unacceptable and must be reduced. Dhaliwal has been part of the movement to figure out how.
Some research suggests that many, if not most, diagnostic errors arise from failures in thinking—cognitive bias, premature closure, insufficient reflection. Accordingly, some researchers frame diagnostic error as largely a problem of clinical judgment: the ability to reason through uncertainty and weigh competing explanations in order to reach the right diagnosis and make decisions about care. “Regrettably, how to think in medicine has been a much-neglected area for medical educators, who stalled somewhere in the Middle Ages, or a century or two earlier,” Pat Croskerry, a retired professor of emergency medicine at Dalhousie University in Canada who is known for his work on cognitive errors in diagnosis, told me.
Dhaliwal credits his own abilities to paying close attention to his own thinking. “I do think you can train yourself to be a better diagnostician,” he said. Early in his training, he closely observed the physicians he most admired. Some of them had a knack for identifying rare diseases that evaded their peers. Others mastered the diagnosis of common conditions so thoroughly that they could recognize every permutation of pneumonia. Dhaliwal wanted to excel at both.
But when he asked physicians how to become that kind of doctor, their advice was usually the same: See a lot. Read a lot. It felt unsatisfying. Every physician sees patients. Every physician reads. What, he wondered, actually separates an exceptional diagnostician from a competent one?
He held on to this question, and about two years after finishing residency in 2003, during a yearlong faculty-development course for medical educators, he encountered a session on clinical reasoning—an emerging field at the time. The physician and medical historian Adam Rodman has described clinical reasoning as “the study of the ability for expert physicians to see what others don’t.” Researchers were beginning to investigate what actually happens in doctors’ minds when they make diagnoses: how they organize their knowledge and put it into practice. Dhaliwal quickly recognized this as the quality he had seen in his role models, though “they didn’t have a term for it, and neither did I.” The idea of clinical reasoning helped clarify the process; the next question was how to get better at it.
Dhaliwal laid out the key steps of a doctor’s reasoning process: collecting data from a patient; synthesizing that information; accessing “knowledge” in the mind, including the details about diseases and how they present; listing possible diagnoses; and choosing one over the others. He also began studying the science of expertise and how people—whether Nobel laureates, Olympic swimmers, or mechanics—become exceptional in their field. “They seek out challenges, whereas most of us instinctively try to minimize challenges once we’re competent,” he said.
They also learn from their mistakes. In a 2017 paper, Dhaliwal wrote that ordinary people develop “extraordinary judgment by extracting as much data as possible from their inevitable mistakes,” a lesson he drew from Philip Tetlock and Dan Gardner’s book, Superforecasting: The Art and Science of Prediction. But medicine doesn’t make that easy for doctors, who may treat a patient once and never see them again. If the patient’s condition worsens, or they receive a different diagnosis later on from someone else, that information may never make its way back to the first doctor. With these ideas in mind, Dhaliwal set out to sharpen his skills. Today, he works in the San Francisco VA Medical Center’s emergency room, where he sees a variety of illnesses and necessarily follows that early advice to see a lot of patients. But, crucially, he also started keeping track of his own cases so that he could follow up on what happened. When he discovers he was wrong, he tries to figure out why. Did he miss something important? Was he exhausted at the end of a long shift? Did he anchor himself to a particular conclusion too quickly?
“I started to get kind of addicted to it,” he said. He explained that the mind wants closure; without knowing the outcome, people tend to assume that things turned out well. His habit of tracking down a patient’s outcome echoes advice delivered more than a century ago by William Osler, one of modern medicine’s founding figures: “Learn to play the game fair, no self-deception, no shrinking from the truth; mercy and consideration for the other man, but none for yourself, upon whom you have to keep an incessant watch.” Diagnostic mastery, Dhaliwal demonstrates, is not a mysterious gift bestowed on a talented few. It is the result of examining one’s own thinking and practice without mercy.
But the reasoning that goes into diagnosis may start to look very different. Since his third year of medical school, Dhaliwal has read The New England Journal of Medicine’s Clinicopathological Conference, or CPC. The CPC is a teaching exercise in which doctors are presented with a real patient’s case and asked to reason aloud toward a diagnosis, much like Dhaliwal’s Minneapolis presentation. Last fall, Dhaliwal participated in a CPC that put him in competition with an AI agent called Dr. CaBot, a medical-education tool developed by researchers at Harvard Medical School.
Both Dhaliwal and Dr. CaBot reached the correct diagnosis and explained their reasoning step-by-step. They correctly concluded that the patient had a problem in the upper part of his digestive system, which caused a bacterial infection to trigger sepsis, among other complications. Dr. CaBot didn’t identify the cause of the problem, whereas Dhaliwal deduced, correctly, that the man had swallowed a toothpick, which poked through his intestine and caused the infection. He had seen that kind of case before.
That Dr. CaBot’s problem-solving came as close as it did to Dhaliwal’s is both promising and disconcerting: It suggests that machines may be able to match the performance of elite diagnosticians. More formal evidence also indicates that large language models may be able to approximate the kind of clinical reasoning expected of physicians. One study published in July 2024 found that when OpenAI’s GPT-4 examined the medical records of 100 patients in an emergency room, the AI was able to diagnose them with 97 percent accuracy, outperforming resident physicians. (OpenAI’s models have advanced since then.) Another study found that ChatGPT scored higher on a clinical-reasoning measure than internal-medicine residents and attending physicians at two academic medical centers. Other studies have been more mixed.
Serious concerns about reliability, sycophancy, and hallucinations remain. But in some ways, what a diagnostician does is not so different from what AI claims to do. Both use enormous amounts of information to recognize patterns in symptoms and diagnoses that tend to appear together. A doctor does this through medical education and personal experience; AI does it by predicting plausible explanations based on statistical patterns it has learned from its training materials.
“This is an electric moment in medicine,” Mark Graber, a physician and co-founder of the nonprofit Community Improving Diagnosis in Medicine, told me. “If you can come up with an AI agent that’s as good as Gurpreet Dhaliwal, that’s an amazing accomplishment that will surpass the abilities of 99.9 percent of doctors.”
How medicine embraces any of this is an open question. Perhaps AI will strengthen clinicians’ reasoning and close the gap between the Dhaliwals and everyone else. Or it could become a crutch for clinicians, and lead them to lose skills. A 2025 study found that after just three months of using an AI tool to find precancerous growths during colonoscopies, doctors were less likely to identify the growths on their own.
For his part, Dhaliwal is equanimous. “I think AI is going to transform health care radically. I don’t think it’s going to change doctoring radically,” he said. He believes that AI is likely to perform best at the extremes of diagnosis: the very simple cases (such as a poison-ivy rash) and the very complex ones (rare or novel diseases). In the not-so-distant future, people may be able to get answers to routine medical questions at home—What’s this spot? Is my cough concerning? How’s my blood pressure?—without ever needing to see a physician. That may be entirely appropriate, because attending to these everyday concerns usually doesn’t require sophisticated clinical judgment or nuanced decision making.
AI could also prove valuable in identifying conditions that a physician may never encounter in their career, or in helping diagnose patients who have stumped multiple clinicians. Those cases tend to hinge on how encyclopedic a doctor’s knowledge of the medical literature is; AI can recognize obscure patterns across millions of cases and publications, and surface possibilities that may lie outside any single physician’s experience.
“What I think is less likely to change is kind of the muddy middle, which is what I think the vast majority of medical practice is,” Dhaliwal said. Much of medicine involves choosing between possibilities: Does a person have an infection, an allergic reaction, or an autoimmune disease? Is it a psychiatric or medical issue? AI could certainly help parse through the options. But medical judgment goes beyond determining what’s most likely; it involves deciding what the diagnosis means for a particular patient. Two people diagnosed with the same cancer may want different futures. One might want the most aggressive treatment available, while the other may decline interventions that would trade quality of life for longevity. These are value-laden decisions that, at least for now, still require something irreducibly human to navigate. An LLM can recite treatment options and survival rates, but it cannot share responsibility for the choices that follow.
Relying on AI for certain aspects of diagnosis could help free doctors to focus on those more human parts of the job. In the United States, more than 100 million people don’t have a primary-care provider, and the profession itself is dwindling. “If in some form AI is able to beat us, or help us improve our ability to do clinical reasoning, you don’t have to be the smartest person in the room to be a physician, which I think is better for the community,” Jeffrey Goddard, a medical student at the University of Iowa who uses chatbots in his training, told me. A diagnosis, most simply, is an answer to the question What’s making me sick? But it can offer much more than that—reassurance, coherence, and, ultimately, relief. Not all of that can be outsourced.
This essay was adapted from Alexandra Sifferlin’s book, The Elusive Body: Patients, Doctors, and the Diagnosis Crisis, published today.

By Alexandra Sifferlin
