Learning AI #15
How should AI be used in medicine?
How is AI working in medicine? Describe areas that are working well and mention the ethical issues facing AI use in medicine.
Well, it looks like the future is here for AI in medicine. Radiology was one of the first medical sectors to be infiltrated by AI. Why? Follow the data.
You felt that crunching sound in your knee trying to be cool playing pick-up basketball, you go to the emergency room, and among the first things the medical team does is send you to the x-ray machine.
Radiology relies on a human doctor’s interpretation of an image. In the US, more than 3 million knee injuries show up in emergency rooms in a typical year.

AI can process data better than any human. Images are just another form of data.
An experienced radiologist might read images of a few hundred knees per year, so over a 30-year career, let’s round up to 10,000 knee images viewed by that experienced doctor you trust with your knee. Sounds like a lot, doesn’t it?
But what about the AI? It read all 3 million knee images this year, and, oh, throw in the last ten years and now the AI has read more than 30 million knee images. Which do you think will be better at identifying anomalies in your knee image, the doctor or the AI?
This is where AI is the most powerful, in medicine and elsewhere: when it can consume huge amounts of data and make sense of it. Images of knees are no different than columns of numbers; it’s all the same to the AI.
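To make “images are just data” concrete, here is a toy sketch in Python. Everything in it is invented for illustration: real radiology AI uses neural networks trained on millions of labeled scans, while this only shows that a program can score an image as a grid of numbers.

```python
# Illustrative only: a toy "image" as a 2D grid of pixel intensities (0-255).
# Real systems learn what "anomalous" means from labeled data; the reference
# image and threshold below are made up for the sake of the sketch.

def anomaly_score(image, reference):
    """Mean absolute pixel difference between a scan and a healthy reference."""
    total, count = 0, 0
    for row_img, row_ref in zip(image, reference):
        for px, ref in zip(row_img, row_ref):
            total += abs(px - ref)
            count += 1
    return total / count

healthy = [[100, 102], [98, 101]]    # stand-in for a "normal knee" reference
scan    = [[100, 180], [97, 175]]    # bright region, e.g. a possible fracture line

score = anomaly_score(scan, healthy)
print(score)       # 38.25
print(score > 20)  # True: crude flag for human review
```

The point is not the arithmetic but the framing: once a knee x-ray is a grid of pixel intensities, flagging an anomaly is just a computation over numbers.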
When your body is overwhelmed by infection and its immune system can’t fight back fast enough, you have a condition known as sepsis. Sepsis is linked to roughly 20% of all deaths worldwide. Pretty nasty.

Sepsis worsens as it progresses. The earlier it can be identified, the better the chances of patient survival.
If sepsis is caught early in its progression before tearing through your body like a tornado, the damage from the infection can be stopped. If stopping sepsis was as easy as watching and acting when an alarm goes off, sepsis would not be the feared killer it is.
AI is better than humans at aggregating data from lab tests, vital signs, and other information from the electronic medical record.
This ability to aggregate and analyze in real time typically flashes early warnings of developing sepsis well before traditional human monitoring methods catch it. Turn AI loose on the data and save lives.
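A minimal sketch of the “aggregate vitals and flag early” idea, using the published qSOFA screening criteria (respiratory rate of 22/min or more, systolic blood pressure of 100 mmHg or less, Glasgow Coma Scale below 15). Real hospital AI combines far more signals, labs, and trends with learned weights; the simulated readings below are invented.

```python
# Rule-based early warning using the qSOFA bedside criteria.
# This is a teaching sketch, not a clinical tool: real models are trained
# on electronic medical record data and produce calibrated risk scores.

def qsofa(resp_rate, systolic_bp, gcs):
    """Return the qSOFA score (0-3); a score of 2 or more warrants a sepsis workup."""
    score = 0
    if resp_rate >= 22:      # fast breathing
        score += 1
    if systolic_bp <= 100:   # low blood pressure
        score += 1
    if gcs < 15:             # altered mental status
        score += 1
    return score

# Simulated vitals arriving over time for one patient.
readings = [
    {"resp_rate": 18, "systolic_bp": 125, "gcs": 15},  # baseline: score 0
    {"resp_rate": 24, "systolic_bp": 118, "gcs": 15},  # breathing fast: score 1
    {"resp_rate": 26, "systolic_bp": 95,  "gcs": 14},  # deteriorating: score 3
]
for r in readings:
    s = qsofa(**r)
    if s >= 2:
        print("ALERT: possible sepsis, qSOFA =", s)  # fires on the third reading
```

The AI version of this does the same aggregation continuously, across many more inputs, which is why it can flash a warning hours before a human notices.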
The biggest ethical problem with AI in medicine is bias created by how AI is built. AI models are trained on huge volumes of data (like the 30 million images of busted knees), and if the data are biased, the results from the AI are biased.
For example, what if most of those images are of males aged 70 and older? If that is the case, then the AI would not give a reliable read of a teenage girl’s knee, since it has not seen enough of them.
This creates too many situations where the AI has limited training data. These are called “edge cases,” a super-important concept in AI. If you are not part of the core demographic the AI was trained on, the AI’s read of your knee image may not be accurate.
Your case is on the edge of what AI can understand and not that different from extreme weather conditions challenging the AI in an autonomous vehicle. If the AI has not seen a few million of them, it cannot produce a reliable output.
I think that medicine-specific AI models will eventually emerge that will utilize good, old-school statistical sampling to feed training data to the AI that better represents the general population. We are not there yet.
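What that old-school sampling could look like: a hypothetical stratified-sampling sketch that draws training records from each demographic group in proportion to a target population share, instead of taking whatever images happen to be on file. The group names and shares here are invented for illustration.

```python
# Stratified sampling sketch: build a training set whose demographic mix
# matches target population shares, not the archive's skew.
import random

def stratified_sample(records_by_group, target_shares, n):
    """Sample n records so each group matches its target population share."""
    sample = []
    for group, share in target_shares.items():
        k = round(n * share)  # how many records this group contributes
        sample.extend(random.sample(records_by_group[group], k))
    return sample

# Invented archive: heavily skewed toward one group in practice, but here
# each group has plenty of records to draw from.
records_by_group = {
    "male_over_70":    [f"img_m70_{i}" for i in range(1000)],
    "female_under_20": [f"img_f20_{i}" for i in range(1000)],
}
target_shares = {"male_over_70": 0.5, "female_under_20": 0.5}

training_set = stratified_sample(records_by_group, target_shares, 100)
print(len(training_set))  # 100: 50 from each group, by construction
```

Statisticians have done this kind of representative sampling for a century; the open question is getting enough medical images from every stratum to fill the quotas.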
AI in medicine is a fascinating topic and one that touches all our lives. But, like autonomous driving and AI-directed warfare, AI in medicine needs a human in the loop for decision making, which makes AI more of a productivity enhancer and less of a replacement for today’s medical practitioners.
You know how it’s good to get a second opinion on a medical issue? Well, next time you are in the ER, ask for that second opinion, but tell them you want the AI’s opinion and you want it now.
Things I think about
More than half of the world's population lives in Asia.
Response from Gemini
How is AI working in medicine? Describe areas that are working well and mention the ethical issues facing AI use in medicine.
In 2026, AI has transitioned from a promising experiment to an embedded "co-pilot" in clinical practice. It is no longer just predicting the future; it is actively managing patient flows, designing bespoke treatments, and spotting diseases that human eyes still miss.
Here is a breakdown of where AI is excelling and the ethical minefield it is currently navigating.
Where AI is Working Well
1. High-Stakes Diagnostics
AI’s ability to recognize patterns in massive datasets makes it superior to humans in specific "visual" fields.
Radiology & Pathology: AI tools are now standard for flagging anomalies in mammograms and chest X-rays. For example, in diabetic retinopathy screening, AI systems currently achieve 98% accuracy, often catching micro-vessels before a radiologist sees them.
Predictive Alarms: In hospitals, AI monitors real-time vitals to predict sepsis or cardiac arrest hours before a patient physically deteriorates, giving doctors a critical "lead time" to intervene.
2. The "N-of-1" Revolution in Personalized Medicine
We have moved beyond "one size fits all" medicine.
Genomic Mapping: AI can now analyze an individual’s entire genome to recommend treatments for specific cancer mutations.
Digital Twins: Surgeons are increasingly using "digital twins"—virtual 3D models of a patient’s specific organ (like the heart)—to simulate a surgery before ever picking up a scalpel. This allows for hyper-precise planning and reduces intra-operative complications.
Drug Discovery: AI has slashed the time to develop new drugs by predicting how proteins fold and how specific molecules will bind to targets, leading to "N-of-1" drugs—medications literally designed for a single person’s biology.
3. Reducing "Clinical Burden"
The most immediate benefit for doctors hasn't been a "cure" but a reduction in paperwork.
Ambient Scribing: AI listens to patient-doctor conversations and automatically drafts clinical notes, allowing doctors to look at the patient instead of their keyboard.
Workflow Optimization: AI manages hospital bed availability and predicts patient discharge dates, significantly reducing wait times in emergency rooms.
Ethical Issues & Challenges
Despite its success, the integration of AI in 2026 faces significant ethical scrutiny.
1. Algorithmic Bias
This is perhaps the most pressing issue. If an AI is trained primarily on data from one demographic (e.g., Caucasian patients in urban centers), its "success" may not translate to others.
Impact: Biased algorithms can lead to underdiagnosis in marginalized communities or recommend less aggressive treatments for certain groups based on flawed historical data.
2. The "Black Box" Problem
Many advanced AI models are so complex that even their creators cannot explain why a specific diagnosis was made.
The Dilemma: If a doctor follows an AI's recommendation to perform a risky surgery, but cannot explain the reasoning to the patient, it violates the principle of informed consent.
3. Accountability and Liability
When an AI makes a mistake, who is at fault?
The Legal Gap: Is it the software developer, the hospital that bought it, or the doctor who followed its advice? In 2026, legal frameworks are still struggling to define "AI malpractice," often leaving clinicians in a defensive position.
4. Data Privacy & "De-identification"
AI requires mountains of data to learn. While patient data is "anonymized," researchers have shown that AI can often "re-identify" patients by cross-referencing multiple datasets. This raises massive concerns regarding patient confidentiality and the potential for insurance companies to use this data to adjust premiums.
5. Skill Atrophy
There is a growing concern among veteran physicians that younger doctors—now entering a field where AI handles 80% of the initial differential diagnosis—may lose the "clinical intuition" required to spot rare cases when the AI fails or the power goes out.
The current consensus in 2026 is that AI should be "Augmented Intelligence" rather than "Artificial Intelligence"—a tool that enhances human judgment rather than replacing it.
Which of these areas—the technical breakthroughs or the ethical challenges—would you like to explore further?
Response from ChatGPT
How is AI working in medicine? Describe areas that are working well and mention the ethical issues facing AI use in medicine.
AI is no longer hypothetical in medicine—it’s already delivering measurable value in a few specific, high-impact areas. The pattern you’ll notice: it works best where data is structured, outcomes are measurable, and the task is narrow.
Where AI is Working Well
1. Medical Imaging (Radiology, Pathology, Dermatology)
This is the clearest success story.
AI systems can detect tumors, bleeds, fractures, and lung nodules with accuracy comparable to specialists in some cases.
In pathology, AI can scan slides for cancerous cells faster than humans.
Dermatology apps can flag suspicious skin lesions for further review.
Why it works: images are standardized, abundant, and labeled.
Real impact: faster triage, fewer missed findings, reduced workload.
2. Predictive Analytics in Hospitals
Hospitals are using AI to anticipate problems before they become emergencies.
Early warning systems for sepsis, cardiac arrest, or patient deterioration
ICU monitoring that flags subtle changes humans might miss
Readmission risk prediction models
Why it works: continuous streams of structured data (vitals, labs, history).
Real impact: earlier intervention → better outcomes and lower costs.
3. Drug Discovery and Development
AI is shortening timelines in pharma.
Identifying promising drug candidates faster
Predicting how molecules will behave
Accelerating protein structure prediction (e.g., breakthroughs like AlphaFold)
Why it works: massive datasets + computational pattern recognition
Real impact: reduced R&D cost and time (though still early-stage in clinical payoff)
4. Clinical Documentation & Workflow Automation
This is quietly one of the biggest wins.
AI “scribes” transcribe doctor–patient conversations
Automated coding and billing support
Summarization of patient histories
Why it works: language models excel at transcription and summarization
Real impact: reduces physician burnout and administrative burden
5. Personalized Medicine & Treatment Recommendations
AI helps tailor treatments to individuals.
Matching cancer patients to targeted therapies
Analyzing genomic data for risk prediction
Optimizing medication choices
Why it works: combining multiple complex data types (genomics + clinical)
Real impact: more precise, effective care (especially in oncology)
Where AI Is Not Working Well Yet (Important Context)
Fully autonomous diagnosis (still risky without human oversight)
General-purpose “doctor AI” replacing clinicians (far from reality)
Many tools fail when deployed outside the environment they were trained in
Ethical Issues and Risks
1. Bias and Inequality
AI models reflect their training data.
If trained on non-diverse populations, they perform worse on minorities
Can worsen disparities in care rather than reduce them
Example: imaging tools that underperform on darker skin tones
2. Explainability (“Black Box” Problem)
Many AI systems can’t clearly explain their decisions.
Hard for physicians to trust recommendations
Difficult to defend decisions legally or clinically
3. Data Privacy and Ownership
AI requires huge datasets.
Patient data is highly sensitive
Questions about who owns and monetizes that data
Risk of breaches or misuse
4. Accountability and Liability
If AI makes a mistake, who is responsible?
The physician?
The hospital?
The software company?
This is still unresolved in many jurisdictions.
5. Overreliance and Deskilling
There’s a real risk clinicians become too dependent.
Reduced diagnostic skills over time
“Automation bias” (trusting AI even when it’s wrong)
6. Informed Consent
Patients often don’t know when AI is being used.
Should patients be explicitly informed?
Should they have the right to opt out?
Bottom Line
AI in medicine is already valuable—but narrow:
It excels at pattern recognition, prediction, and automation
It struggles with judgment, context, and accountability
The near-term winners aren’t “AI replacing doctors”—they’re AI augmenting high-friction parts of healthcare, especially imaging, prediction, and admin work.
If you want, I can go one level deeper—e.g., which specific companies/tools are actually working in hospitals today or where the ROI is showing up financially.
**********