Do we really want an AI model that cannot properly draw a human hand to make our medical diagnoses?
AI dips its toe into clinical decision support
According to the American Medical Association (AMA), two-thirds of physicians were using healthcare AI in early 2025.
They refer to this technology as augmented intelligence rather than artificial intelligence. Renaming AI "augmented intelligence" reflects the optimistic belief that AI enhances human intellect and decision-making, rather than replacing them. This paradigm assumes that actual doctors will always have the final say in diagnoses and, in the medical insurance realm, in determining medical necessity. To which I say, "In a perfect world."
In clinical decision support, AI automates note-taking during patient visits and analyzes data and images for early detection. It can also personalize treatment plans, manage chronic diseases through wearable devices, and analyze data to assist doctors in decision-making. All this is good, right?
The positive spin is that these smart assistants can reduce burnout among doctors and clinicians while improving accuracy and efficiency in patient care.
The reality? AI is sometimes mistaken.

