
Overcoming Physician Burnout as Health Architects: Episode 13.

AI and the Health Architect: Can Artificial Intelligence Help with Physician Burnout?

For your information, if you prefer reading, I invite you to look at the written post, which also includes embedded links for references. Also, a disclosure: I have no conflicts of interest or disclosures regarding the products cited.

Last time, we talked about how keeping up with medical information after the training years requires deliberate attention and focus, because being a physician demands a lifelong learning plan.

It’s likely no surprise to anyone that in the last few years, the power of computing and data systems has provided us with a new, powerful ally: AI (Artificial Intelligence). AI technology has affected basically every industry, including medicine. So, as we discuss how to overcome burnout as physicians and Health Architects, many may rightfully be asking: does AI fit into the equation of overcoming burnout as a Health Architect? And if so, how?

The short answer is yes – well, maybe. Health Architects include elements of AI in clinical work, but they also exercise care and caution with it. In this episode, we will discuss what AI generally encompasses and which areas hold the highest utility and promise for the Health Architect. In the next episode, we’ll dive into why we can’t rely on these tools completely and why we must be cautious about how – and even whether – we use them.

What is AI?

AI, or Artificial Intelligence, refers to the ability of machines or computer systems to perform tasks that typically require human intelligence. Intelligence is not just knowledge – it’s the ability to apply existing knowledge creatively to a new problem. Intelligence is more than learning facts: it includes learning, problem-solving, reasoning, recognizing patterns, understanding language, and making decisions. Imagine trying to design an airplane. It takes intelligence to bring knowledge of physical parts, the physics of flight, fuel combustion, and myriad other scientific facts and mechanistic processes to devise and create a vehicle that can take off and stay in the air. The same is true of intelligence and medicine. We need scientific knowledge of human anatomy and physiology to understand medical concepts, but we need intelligence to personalize and apply those rules creatively and comprehensively to allow a sick person to get better and recover.

In the world of living beings, humans stand out because we excel at intelligence compared to our animal and plant peers. But that was true only until we invited computers to the party. Data systems have now advanced to the point that some speculate they can not only match human intelligence (also known as Artificial General Intelligence), but even surpass it (also known as Superintelligence). Philosophers and scientists still debate whether superintelligence is even realistically achievable, but most agree that if and when it arrives, humans are kind of screwed, since our needs – even our existence – may evaporate as afterthoughts. This is the famous paper clip thought experiment devised by philosopher Nick Bostrom, in which a superintelligent AI with a goal as arbitrary as making paper clips will aim to remove all obstacles – including human existence – without a second thought. Said differently, we may become bugs on the windshield of an AI-powered computer speeding down the highway toward whatever destination it thinks is most important.

But I digress – let me reel myself back from the dystopic apocalypse and focus on where AI can positively help physicians today and incorporate it into the burnout equation.

In brief, the current applications of AI are still somewhat limited. Sure, we can create fancy pictures on our phones, use GPS to navigate around the globe, and ask ChatGPT to create an impressive travel itinerary, but the revolutionary transformation of every industry predicted after the initial hype of ChatGPT 3.5, when it arrived in late 2022, has somewhat stalled. Why? Two main reasons: first, most companies are still dealing with bad data that limit the reliability of what AI engines recommend, and second, integrating AI safely and effectively into existing processes isn’t straightforward. Medicine is struggling with these same challenges.

That said, two general categories of AI are currently in vogue:

  • Narrow (weak) AI: AI systems designed to perform a specific task. I’ll discuss this more below, but in general, examples include image analysis and recognition, voice transcription, and general recommendation systems.

  • Generative AI: A type of AI that can create new content such as text, images, music, or code based on patterns it has learned from existing data.

While most institutions have sagely elected to walk before they run, narrow AI applications in medicine are plentiful and growing. They are not universally available, and the accuracy and reliability vary widely, but some leading and representative examples include the following:

1. Medical Imaging & Diagnostics

  • Radiology: AI algorithms analyze X-rays, CT scans, and MRIs to detect abnormalities like tumors, fractures, or hemorrhages.

  • Dermatology: AI models classify skin lesions and predict malignancy from acquired images.

  • Pathology: AI assistants analyze pathology slides to identify cancer cells or other diseases.

2. Clinical Decision Support Systems (CDSS)

  • AI-driven tools that provide recommendations to physicians based on patient data, clinical guidelines, and evidence-based medicine. Examples include sepsis prediction algorithms or risk calculators for stroke and heart disease (see the illustrative sketch after this list).

3. Natural Language Processing (NLP) for Medical Records

  • Extracting relevant information from unstructured electronic health records (EHRs), identifying clinical trends, flagging missing documentation, or aiding in clinical coding.

4. Predictive Analytics

  • AI models can predict outcomes like hospital readmission risk, likelihood of disease progression, or ICU patient deterioration based on historical and real-time patient data.

5. Virtual Health Assistants & Chatbots

  • AI can triage patient symptoms, provide health education, remind patients to take medications, or answer routine medical questions.

6. Personalized Medicine

  • AI algorithms that analyze genomic data to recommend tailored treatments, particularly in oncology and pharmacogenomics.

7. Operational Efficiency

  • AI optimizing scheduling, resource allocation, and supply chain management within healthcare systems.

8. Robotic Process Automation (RPA)

  • Automating administrative tasks like claims processing, billing, and appointment scheduling, reducing human workload hours and errors.
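To make the clinical decision support idea in item 2 concrete, here is a minimal sketch of the kind of rule-based risk calculator a CDSS might embed – in this case, a simplified CHA2DS2-VASc stroke-risk score for atrial fibrillation, written in Python. The alert wording and the threshold messaging are hypothetical illustrations, not clinical guidance or any vendor’s product.

```python
from dataclasses import dataclass

@dataclass
class AfibPatient:
    age: int
    female: bool
    heart_failure: bool
    hypertension: bool
    diabetes: bool
    prior_stroke_or_tia: bool
    vascular_disease: bool

def cha2ds2_vasc(p: AfibPatient) -> int:
    """Compute the CHA2DS2-VASc stroke-risk score for atrial fibrillation."""
    score = 0
    score += 1 if p.heart_failure else 0
    score += 1 if p.hypertension else 0
    score += 2 if p.age >= 75 else (1 if p.age >= 65 else 0)
    score += 1 if p.diabetes else 0
    score += 2 if p.prior_stroke_or_tia else 0
    score += 1 if p.vascular_disease else 0
    score += 1 if p.female else 0
    return score

def cdss_alert(p: AfibPatient) -> str:
    """Turn the score into the kind of nudge a CDSS might surface (hypothetical wording)."""
    s = cha2ds2_vasc(p)
    if s >= 2:
        return f"Score {s}: consider discussing anticoagulation per current guidelines."
    return f"Score {s}: lower estimated stroke risk; reassess periodically."

print(cdss_alert(AfibPatient(age=78, female=True, heart_failure=False,
                             hypertension=True, diabetes=True,
                             prior_stroke_or_tia=False, vascular_disease=False)))
```

Real CDSS products wrap logic like this in EHR integration, guideline updates, and validation; the point here is only that the underlying recommendation can be a transparent, auditable rule.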

Generative AI, which many narrow AI systems employ, has particular appeal in these areas but also as a standalone tool. Almost 50% of the public now uses one of several generative AI-based assistants – OpenAI’s ChatGPT, Anthropic’s Claude, Meta’s Llama, Google’s Gemini, and other large language models (LLMs) that leverage the attractive conversational power of AI. The growth has been astronomical: when ChatGPT launched in 2022, it reached 1 million users in 5 days; by February 2025, it had 400 million users. From a business standpoint as well, pretty much every big tech company worth its salt, along with a slew of startups, has crossed the AI frontier. Now, users can do everything from generating customized videos of never-seen-before oddities to writing detailed research reports that could give analysts at McKinsey a run for their money.

At its core, AI uses algorithms, data, and computational power to mimic aspects of human cognition. Most of these algorithms leverage the power of prediction by examining the relationships between inputs and outputs. Who hasn’t looked at the sky, seen bulbous gray clouds, and known that it’s going to be a torrential downpour? Likewise, what physician hasn’t seen an older, overweight white man complain of chest discomfort in the middle of his chest that gets worse with exertion and had alarm bells go off that this is a heart attack? To go back to Episode 5 of this series, predictive algorithms, whether in our brains or in computers, convert possibility into probability.
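To ground the “possibility into probability” idea, here is a minimal logistic-regression-style sketch in Python that maps a handful of chest-pain features to an estimated probability. The feature names and weights are invented for illustration; a real predictive model would learn them from large, validated clinical datasets.

```python
import math

# Hypothetical weights for illustration only -- a real model would learn
# these from validated clinical data, not hand-picked numbers.
WEIGHTS = {
    "age_over_60": 1.2,
    "exertional_pain": 1.5,
    "substernal_location": 0.9,
    "smoker": 0.7,
}
BIAS = -3.0  # baseline log-odds when no risk features are present

def predicted_probability(features: dict[str, bool]) -> float:
    """Logistic model: weighted sum of inputs -> sigmoid -> probability."""
    log_odds = BIAS + sum(w for name, w in WEIGHTS.items() if features.get(name))
    return 1.0 / (1.0 + math.exp(-log_odds))

patient = {"age_over_60": True, "exertional_pain": True,
           "substernal_location": True, "smoker": False}
print(f"Estimated probability of an acute coronary event: {predicted_probability(patient):.0%}")
```

Whether the weights live in a clinician’s intuition or in a trained model, the mechanism is the same: inputs are combined into an estimate of how likely an outcome is.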

Some of this is amazing and is radically changing medical practice as we speak, in some cases by surpassing human ability itself. For example, AI can predict the likelihood of developing sepsis, a life-threatening infection, or diagnose skin cancer better than physicians can. Rather than having to schlep to the library stacks or scour online resources for salient articles, AI tools like OpenEvidence can do the same in seconds. Rather than wait hours or days for a dictation service to send reports back, computer-based language software can transcribe our voices in real time into notes in minutes and even spell “lymphangioleiomyomatosis” on the page without a hiccup. Computer-assisted coding helps us generate more accurate bills for patient encounters. Smart alerts can instantly prevent us from ordering the wrong test or medication dose. What used to take minutes or hours takes seconds or minutes – and with fewer errors.
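As one small example of how a “smart alert” might work under the hood, here is a toy dose-range check in Python; the drug names and per-dose limits are placeholders, not a real formulary or clinical guidance.

```python
# A toy dose-range "smart alert" -- the drugs and per-dose limits below are
# placeholders for illustration, not a real formulary.
MAX_SINGLE_DOSE_MG = {
    "drug_a": 1000,
    "drug_b": 20,
}

def dose_alert(drug: str, dose_mg: float) -> str | None:
    """Return a warning string if the ordered dose exceeds the configured limit."""
    limit = MAX_SINGLE_DOSE_MG.get(drug)
    if limit is not None and dose_mg > limit:
        return f"ALERT: {dose_mg} mg of {drug} exceeds the configured maximum of {limit} mg."
    return None  # no alert fires

warning = dose_alert("drug_b", 40)
if warning:
    print(warning)
```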

AI and Burnout

This is all well and good, but how can AI be used by Health Architects in the battle against burnout? How can Health Architects effectively approach AI and use it to our advantage as we face burnout?

The first step is awareness: knowing what AI is out there and how it relates to our particular practice. Look for opportunities to discover AI applications in medicine, especially from journals in your particular field (for example, I subscribe to JAMA+ AI, which offers a free weekly recap of what’s happening in AI in internal medicine). Also, attend CME courses and specialty society conference sessions on AI (not just the latest drugs and medical devices). What’s more, many medical societies offer AI toolkits for their respective fields of practice. Use them! Like trying a new surgical technique or antibiotic, we need to experiment. Try AI out, see what works and what doesn’t, and take notes. Remember: don’t let perfect be the enemy of the good.

Second, connect with your institution’s innovation team (most have dedicated teams at this point) to see what initiatives are being planned and implemented enterprise-wide, including AI. In my own institution, I joined a committee of administrators and physician peers to improve coding and billing compliance using technology-based tools – not only did this improve my granular understanding of my division’s productivity, but it also allowed me to participate proactively in bottom-up solution-building rather than reactively receiving solutions handed down from the top (and this includes weighing in on AI products and services offered by the various vendors the institution is considering).

Third, talk to your administrators about your ideas and experience, including with AI. Don’t work in silos. For my part, I constantly talk to my administrative counterparts to make tweaks to our telemedicine program and electronic billing platform to improve operational efficiency, and I use AI assistants to help me think of out-of-the-box ideas for process improvements and/or to provide potential verbiage for protocol updates, rather than writing them from scratch. Engaging with administrators in this way makes finding solutions feel like a joint – rather than solitary – task.

Finally, talk to peers within your networks. You’d be surprised at how much you can learn from others – plus you’ll know where to go for answers. In my case, I had read about generative AI like ChatGPT in newspaper articles for months prior to early 2023, but it was only over dinner one night with a friend and pulmonary colleague that I learned what ChatGPT could really do – he showed me how it could create bullet points for a hypothetical presentation on chronic cough or generate pristine verbiage for a putative research proposal on lung cancer (and he did it all in seconds). It was then that I fully grasped the power of AI assistants and generative AI, and I started speculating about what it could do for me and my burnout.

By being better equipped with AI and better connected with others who use it, Health Architects feel more competent: they find reliable information faster, get work done more efficiently, and free up bandwidth for personalizing plans for patients. They spend less time on menial tasks and more time on integrative, innovative, and collaborative activities that are far more fulfilling. Offloading cognitive load strategically and enhancing collaboration with others strengthen the sense of agency and human connection that help mitigate and overcome burnout.

Take-Home Point: AI holds real promise to reduce physician burnout by streamlining clinical tasks, enhancing decision-making, and improving efficiency. However, Health Architects must stay informed and engage thoughtfully and collaboratively to ensure AI augments rather than replaces the human elements of care.

But despite the performance and promise that AI brings, we need to pause before declaring game over. Why? As with any new technology that is still maturing, there are substantial reality checks to consider. That will be the subject of my next post, so I will see you then.
