What Happens To Medicine When Machines Are As Good As Doctors?
Imagine if every physician and nurse had a clinical partner as capable, knowledgeable and reliable as they are. Not a junior resident to supervise or a chatbot that summarizes notes, but an associate capable of solving novel problems, reasoning across specialties and making sound medical decisions 24/7 without burnout or bias.
That day may be closer than most people expect.
Just 12 months ago, when I published the book ChatGPT, MD, I predicted that an autonomously reliable medical AI system was still a decade away. Today, with artificial general intelligence (AGI) on the horizon, that forecast seems wildly conservative.
IBM defines AGI as the moment “an artificial intelligence system can match or exceed the cognitive abilities of human beings across any task.” Thus, AGI is not a tool, product or program. It’s a milestone.
As for timing, OpenAI CEO Sam Altman recently said his team is "confident we know how to build AGI as we have traditionally understood it," predicting it could happen as early as 2025. Anthropic CEO Dario Amodei expects AGI-level capabilities by 2027, and believes tools like Claude will surpass "almost all humans at almost everything."
Experts may disagree on the exact timeline, but most agree on one thing: AGI is coming soon. Many industry insiders now expect it to arrive within five years.
And because AGI isn't a single product or a switch that flips, it won't arrive as one dramatic technological breakthrough. Instead, it will emerge gradually, the result of year-over-year exponential improvements in generative AI.
In medicine, those gains will produce both clinical opportunities and cultural disruption.
AGI will mark a point when generative AI systems can reason across specialties, apply evolving clinical guidelines and reliably solve complex medical problems without being explicitly programmed for each scenario. An AGI-derived application could integrate information from cardiology, endocrinology and infectious disease to diagnose a patient and recommend treatment with human-level accuracy.
AGI will challenge the long-held belief that humans are inherently better than machines at delivering medical care. Once AI can match physicians in reasoning and accuracy, both patients and clinicians will be forced to reconsider what it means to "trust the doctor."
That level of performance will mark a sharp departure from today's FDA-approved tools, all of which rely on "narrow" AI. These applications are designed for single tasks, such as reading mammograms, detecting diabetic retinopathy or flagging arrhythmias. They are trained to identify small differences between two specific data sets. Consequently, they are limited in breadth of expertise and can't generalize beyond their training. An AI tool trained to interpret a mammogram, for instance, can't analyze a chest X-ray.
Generative AI, by contrast, draws from vast sources of information, including medical textbooks, published research, clinical protocols and public data. This breadth will allow future GenAI systems to answer a wide range of clinical questions and continually improve as new knowledge emerges.
Since large language models reached the public in 2022, GenAI has grown by leaps and bounds in power and capability. We're not at AGI yet. But with recent improvements, the finish line is in sight.
The gap between today’s generative AI capabilities and AGI is narrowing fast. Once that threshold is crossed, medical professionals will face an existential moment.
Already, more than half of clinicians are comfortable using generative AI for administrative and other non-medical tasks: summarizing notes, drafting instructions, retrieving reference information. But few believe these systems can match their own clinical judgment. AGI will challenge that assumption. Once GenAI systems achieve reasoning and pattern recognition equivalent to that of physicians, the line between human and machine expertise will blur.
To understand how different healthcare will be, consider how AGI-level performance could improve medical care delivery.
As AI systems approach clinical parity, they won’t just support administrative work. They will transform medical practice itself.
For medicine, the question is no longer, “Will AI replace doctors?” Instead, healthcare leaders and clinicians must ask: How can we best use generative AI to augment clinical care, fill critical gaps and make medicine safer for patients?
Whether GenAI strengthens or destabilizes the healthcare system will depend entirely on who leads its integration. If physicians and current healthcare leaders take the initiative, leveraging AGI-level capabilities to empower patients, enhance decision-making and redesign workflows, both providers and patients will benefit.
But if they waver, others will take the lead. U.S. healthcare represents $5.2 trillion in annual spending. Tech companies, startups and corporate giants all have an interest in capturing a piece of that pie. If clinicians fail to shape the next era of medical care, business executives will. And their priorities will favor profit over patient outcomes.
To avoid that fate, foundational shifts must begin now.
Making these changes in care delivery will be uncomfortable for physicians, but they’ll be far less painful if doctors start now. The train is coming down the track. We don’t know the exact schedule for AGI. But we know it’s coming. Whether you give care, receive it—or both—the question is: Will you be ready when it arrives?