By: Preston Nicely, PGY-2

Artificial intelligence. Few phrases have so thoroughly captured the public imagination. AI is everywhere, from our phones and inboxes to the patient chart. It’s the buzzword of the decade, tossed around in academic discussions and hospital hallways alike. But what is AI, really? And can we trust it?

At its core, AI refers to computational systems that perform tasks traditionally requiring human intelligence, such as pattern recognition, decision-making, or generating natural language. Yet the boundaries of what counts as “AI” are shifting so rapidly that even experts debate its definition. A generation ago, a simple algorithm predicting glucose trends might have been considered cutting-edge AI. Today, models like ChatGPT can synthesize clinical evidence in seconds, while platforms such as NotebookLM can digest late-breaking clinical science and produce a personalized audio podcast for our Tinsley articles of the week. Meanwhile, clinicians are increasingly turning to OpenEvidence for rapid, reference-supported clinical guidance in place of traditional resources like UpToDate.

This explosion of instantaneously accessible medical knowledge is exciting, but it comes with risk. AI tools can generate confidently written, but subtly incorrect, clinical statements. In one now-famous case, a widely used algorithm designed to predict which patients would benefit from extra care management systematically underestimated illness severity in Black patients, because it used health-care spending, rather than physiological need, as its proxy for illness. Since less money has historically been spent on Black patients with the same level of need, the algorithm scored them as healthier than they really were. The result? Many sicker patients were mistakenly labeled “low risk.” This episode underscored a truth we cannot ignore: AI can encode and amplify the very biases we seek to eliminate.
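For the technically curious, the mechanism is easy to reproduce. Below is a toy simulation in Python, not the actual commercial model (whose details are proprietary); every number and variable name is invented for illustration. It trains the simplest possible model to predict spending as a stand-in for sickness, then shows that a group incurring less spending for the same illness ends up scored as lower risk.

```python
# Toy simulation of proxy-label bias; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical true illness severity.
group = rng.integers(0, 2, n)
severity = rng.normal(50, 10, n)

# Hypothetical access gap: group 1 incurs ~30% less spending at the same severity.
access = np.where(group == 1, 0.7, 1.0)
prior_cost = severity * access + rng.normal(0, 5, n)   # model feature
future_cost = severity * access + rng.normal(0, 5, n)  # proxy label: cost, not health

# Least-squares fit: predict future cost, then treat the prediction as a "risk score."
slope, intercept = np.polyfit(prior_cost, future_cost, 1)
risk = slope * prior_cost + intercept

# Flag the top quartile of risk scores for extra care management.
flagged = risk >= np.quantile(risk, 0.75)
for g in (0, 1):
    sel = group == g
    print(f"group {g}: mean severity {severity[sel].mean():.1f}, "
          f"flagged for care {flagged[sel].mean():.1%}")
```

Both groups are equally sick, yet group 1 is flagged far less often. Notice that the model performs its stated task of predicting cost perfectly well; the bias lives entirely in the choice of label.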

At UAB, we are entering an era where AI isn’t a novelty; it’s a necessity. Our soon-to-be EMR, Epic, will come with three new friends: Emmie, Art, and Penny. All three are AI chatbots: Emmie will answer patients’ questions about lab results, Art will help clinicians draft patient summaries, and Penny is designed to alleviate administrative burden by assisting with medical billing, coding, and prior authorizations. Soon, pre-rounding may mean reviewing a succinct AI-generated summary of overnight events, trends, and labs, distilled from hundreds of data points. Charting and documentation may become collaborative efforts between clinician and algorithm, blending human nuance with computational speed. As physicians, however, we must remain the final arbiters of truth in clinical decision-making. Blind reliance on AI, even for something as simple as auto-completing a note, risks perpetuating false information, with enormous implications for public trust in the veracity of science and medicine.

For trainees and faculty alike, AI literacy is becoming as essential as pharmacology or EKG interpretation. Just as we teach evidence-based medicine, we must now teach algorithm-based medicine: how to interpret, question, and, when necessary, override machine recommendations. AI doesn’t understand ethics, empathy, or context. It doesn’t feel the tension of a difficult goals-of-care conversation or the weight of uncertainty when labs and symptoms don’t align. Those remain uniquely human burdens—and privileges.

As UAB continues to lead in research and clinical innovation, we must also lead in AI. This means embedding critical thinking about algorithms into resident education, ensuring transparency in predictive tools, and encouraging multidisciplinary collaboration between data scientists, clinicians, and ethicists. In the end, AI is not replacing the physician—it’s redefining what it means to be one. How it’s used, and how wisely it’s trusted, will certainly define all our careers and the next era of medicine.