AI in Healthcare

If medical advice sounds calm, confident, and professional, we tend to believe it — even when it’s incorrect. That instinct, harmless in everyday life, can be dangerous in healthcare.

Google Maps (Trusted, Even When Wrong)

Most of us have followed GPS into a wrong turn. We hesitate—then obey.
The voice is calm. The directions are precise. So, we trust it.

AI (Artificial Intelligence) tools such as ChatGPT, Perplexity, and Gemini work the same way in medicine: a confident tone, clear steps, and an assumed correctness that can be dangerous.

When Shoulder Pain Wasn’t Just a Gym Injury

A middle-aged man developed shoulder pain after a gym workout and sought advice from ChatGPT, which suggested shoulder bursitis, describing it as a common inflammatory condition related to overuse.

Reassured by the explanation, he rested, applied ice, and took anti-inflammatory medication.

Two weeks later, the pain persisted. Concerned, he finally consulted a physician.

His ECG was abnormal, prompting urgent evaluation. Coronary angiography revealed a 95% blockage of the left coronary artery, requiring immediate intervention.

The initial symptom was not a sports injury — it was referred cardiac pain.

When a Simple Symptom Becomes a Crisis

In another case, a middle-aged man developed a mild headache after poor sleep and a long workday. There was no fever, vomiting, or neurological symptoms.

The AI warned that the headache could indicate a brain tumour or brain bleeding, and advised urgent neuroimaging.

Alarmed, he rushed to the emergency department.

The diagnosis was simple: a headache due to stress and sleep deprivation.

A good night’s sleep and hydration resolved it.

The harm wasn’t physical, but it was real: unnecessary fear, an avoidable emergency visit, and needless tests.

A recent study published in NEJM AI (New England Journal of Medicine AI) in 2025, titled “People Overtrust AI-Generated Medical Advice Despite Low Accuracy,” examined how people respond to medical advice written by artificial intelligence compared with advice written by real doctors.

The findings were striking — and concerning.

What They Found

People Couldn’t Tell AI from Doctors

Participants correctly identified whether an answer came from a doctor or AI only about 50% of the time — essentially guessing.

Even more concerning: they were confident in their guesses, whether they were right or wrong.

Wrong AI Answers Were Trusted as Much as Doctors

AI answers that doctors rated as low in accuracy were still perceived by the public as valid, trustworthy, complete, and satisfactory.

In many cases, these incorrect AI answers performed just as well as — or better than — real doctors’ responses.

People Were Willing to Act on Incorrect AI Advice

Participants reported they were just as likely to follow low-accuracy AI advice as they were to follow a doctor’s advice, including:

  • Acting on the recommendation
  • Seeking (or not seeking) further medical care based on it

This means inaccurate AI advice didn’t just sound good — it changed behaviour.

Confidence and Clarity Beat Accuracy

AI responses were often rated as easier to understand, more thorough, and more reassuring, even when the medical content was incomplete or wrong.

In healthcare, believability mattered more than correctness.

Why This Matters for Patients

This study shows that the biggest risk of medical AI isn’t just error.

It’s an error that sounds confident, calm, and professional.

AI doesn’t hesitate.
It doesn’t say “I’m not sure.”
It rarely emphasises uncertainty or rare but serious alternatives.

The Takeaway

Artificial intelligence can serve as a valuable tool for education and general support.
However, in matters of health, confidence alone is insufficient.

The expertise of a qualified medical professional remains essential.

Dr. Asik Ali
Consultant Radiologist,
Kauvery Hospital, Chennai
