
“What Do Doctors Offer that AI Can’t?” May Be Your Most Important Question Right Now

Updated: Nov 14

AI is rolling out in medicine faster than most of us can process. Ambient scribes documenting visits. Clinical decision support algorithms. Automated prior authorizations. The promises are compelling—reduced clerical burden, more face time with patients, less burnout.


I wanted this. As a palliative care doctor and director of physician well-being at my institution, I've spent years watching colleagues drown in documentation, log into the EHR at ungodly hours, and burn out from the relentless task load. When AI tools promised relief, I advocated for them.


And now it's happening. My health system, like many across the country, is about to scale AI scribes and other tools. Leadership is bringing well-being champions into the conversation. They seem to be genuinely trying to help us do our jobs.


But something feels unsettled. And I'm not the only one feeling it.


The Unasked Question


The AI rollout is happening. But there's a question underneath all of it that institutions aren't making space for—and maybe can't make space for.


Last week, I attended a virtual discussion on AI in healthcare with fellow palliative care clinicians from all over the country. From the initial round of introductions, I immediately clocked that this conversation would be deeper than just "strategies to implement AI note-writing"; this was a meeting of individuals who had already begun contemplating the existential. (Would you expect anything else from a group of palliative care folks?)


Because we all felt it—that tension between promise and threat. The promise is real: AI could free us from documentation drudgery, give us back hours. But the fear is also real. What if instead of giving us our time back, the bosses simply demand we use that time to see more patients and submit more bills? Worse, what if institutions use AI not to support physicians but to reduce the need for us?


One person shared, "AI isn't inherently good or evil. It's just a tool." We mused about that together. Clinicians ought to be at the frontier, leading adoption of this tool, making sure it's being used for the good of patients and clinicians. For our specialty, palliative care, what then should our role be? Then someone said it: "Hospice and palliative medicine is truly the human side of medicine."


That felt true. But it raised the central question: what is it that a human offers that AI can't?

The discussion was robust. Palliative care clinicians are empathetic communicators. On the other hand, AI models can already mimic empathy decently. What about “thinking” outside the box? AI is not good at improvising. It is trained on idealized cases, clean data, protocols that assume everything happens in sequence. But bedside medicine is messy, and palliative care teams can use intuition and adapt on the fly in ways that AI cannot.


Someone else commented: "Presence. That's what we offer. That's what AI can never replace." There was a quiet moment and some head nods.


That felt right to me. Sort of. AIs aren't physically present with patients (yet). They don't sit quietly with someone in a clinic room, stand alongside a family in a cramped hospital room, or breathe the same air.



Maybe there’s more to human presence, though? Something about the knowledge that another human being has also lived. Experienced joy and grief. Jealousy, procrastination, avoidance. Imperfection. Doesn't a human "presence" witness suffering, navigate hard decisions, in a way that is more fully felt by the patient?


I left the discussion feeling less alone but also needing to think about the question a whole lot more.

Why This Question Matters


Without clarity on what makes us irreplaceable, we can't:


Advocate effectively for how AI should be implemented. If we haven't articulated what must remain in human hands, we can't push back when administrators want to automate one more thing.


Recognize when efficiency gains come at the cost of what matters most. If the AI scribe changes how you listen to patients, is saving 22 minutes worth it?


Recognize when we're being asked to participate in our own displacement. If an AI tool truly augments your practice, that's one thing. But if it's training on your expertise while productivity expectations rise and staffing gets "right-sized"? That's something else entirely.


Lead this transition instead of being swept along by it. The physicians who will shape how AI gets used in medicine are the ones who've done this internal work first.

What's Really Driving AI in Healthcare?


As we navigate our own comfort with becoming “augmented” by AI products, it is prudent to pause and be skeptical about what's driving this surge. Venture capital has poured billions into healthcare AI companies. These aren't nonprofits; they're businesses that need to generate returns for investors. While some may genuinely care about physician wellness and patient outcomes, it would be naive to assume those are the primary drivers when there are shareholders expecting profits.


The economics matter because they shape incentives. When vendors pitch AI tools to health systems, business cases typically center on ROI, operational efficiency, productivity gains, and, yes, physician satisfaction. But it's worth asking: Are the features being built OPTIMIZED for physician well-being and patient outcomes? Or for demonstrable returns on investment? These aren't necessarily incompatible goals, but they're not automatically aligned either.


If the underlying business model depends on productivity extraction, that will influence which features get prioritized, how success gets measured, and what trade-offs seem acceptable.

A recent Lancet review puts it bluntly: "Health care in the USA: money has become the mission."

The authors document how market-based policies enabled firms "obligated to prioritise shareholders' interests" to gain control of vital clinical resources across healthcare—from hospitals to dialysis clinics to hospices. The pattern is consistent: when profit becomes preeminent, quality suffers, costs rise, and the focus shifts from patient care to shareholder returns.


AI in healthcare, born out of the same milieu, is arguably on track to follow the same trajectory.

A Concerning Pattern


Early data shows ambient scribes can modestly reduce documentation time. But it's still very early, and we should be paying attention to what's happening as AI gets deployed in workplaces across other industries.


Research from Upwork found that while 96% of C-suite leaders expect AI to boost productivity, 77% of employees using AI say these tools have actually increased their workload. And 88% of the highest-performing AI users report significant burnout.


The efficiency gains aren't translating to workers going home earlier or having lighter workloads. Instead, many report being asked to do more work as a direct result of AI—and a World Economic Forum survey found that 40% of employers anticipate workforce reductions in areas where AI can automate tasks.


Healthcare isn't exempt from these economic dynamics. We've seen this with EHRs: they were supposed to give us more time with patients, but instead became a burnout driver optimized for billing, not care. The risk is that physicians effectively become trainers for systems that will be used to justify tighter staffing, higher patient volumes, and greater productivity expectations, all while we continue to shoulder the liability and the emotional labor that AI can't automate.


Furthermore, large language models DEPEND on human expertise without directly compensating those humans for their blood, sweat, and tears. My friend who works in publishing largely avoids using AI because she believes it not only steals from authors and artists but also, over time, degrades the quality of writing and art. As you begin to use AI in your practice, every time you take care of a patient, the AI is listening and learning from your 7 to 11 years of postgraduate medical training. Every time you manually correct its documentation, you'll be giving it free supervision.


The question isn't whether the rise of AI will be different from the rise of the EHR. The question is whether physicians will be prepared and vigilant, navigating AI adoption with eyes open.



The Answer


In the aftermath of my palliative care discussion group, I realized something else about presence. The impact of that uniquely human presence in those hospital and clinic rooms is bidirectional. It doesn’t only touch patients. Being with patients influences how doctors think, feel, and act. We have proximity—we see patients daily, know their stories, share in their hopes and fears. We care about what happens to them.

In a healthcare system increasingly driven by profit, human clinicians may be the only stakeholders positioned to choose a different mission: patients.

Why are physicians uniquely positioned to resist profit extraction in healthcare?


An AI can't choose patient welfare over profit. A human doctor can. An AI will execute the algorithm. A human doctor can say "no, this is wrong."


You face consequences that create different incentives. You carry what happens emotionally, legally, professionally. Those stakes shape your decisions in ways shareholder value never will.


You can organize collectively. AI can't unionize. AI can't refuse. AI can't build professional coalitions. You can.


You can choose patients over profit, even at personal cost. You can push back on a denial. You can spend extra time. You can advocate. You have agency AI doesn't.


Professional norms exist independent of corporate goals. The Hippocratic tradition, medical ethics, your professional identity—these give you a separate allegiance that competes with profit.


These distinguishing characteristics are powerful. But that power requires moral clarity about what—and who—you're committed to.



Individual Clarity, Collective Power


The institutional pace of AI implementation doesn't allow for this kind of reflection. But that doesn't mean the reflection isn't necessary. It just means we must create that space for ourselves.


For me, that's looked a lot like coaching—both seeking it out and now offering it to other physicians. Not because coaching has the answers about AI, but because it creates space for you to discover your own answers about how to “do good work in a bad system,” as the Lancet authors put it. And, more importantly, to develop the moral clarity and agency to act on those answers.


In a system where money has become the mission, maintaining your commitment to patients requires clarity about what you're fighting for and strength to sustain that fight without burning out.

Coaching can help you clarify your mission and build sustainable boundaries.

What parts of your work feel most human? What are you willing to automate, and what must stay in human hands? How do you want to show up as AI changes your practice?


How do you maintain your commitment to patients in a system increasingly designed around profit?


And crucially: What do you offer that AI never will? And how do you protect that while still embracing change?


Getting clear on these questions matters for your own practice and well-being. But it also matters for something bigger.


Because individual physicians getting clear on what they're protecting is the foundation for collective action.


The physicians who will effectively advocate for thoughtful AI governance at their institutions are the ones who've articulated what they're fighting for. The ones who will push back against productivity creep are the ones who know their own boundaries. The ones who will organize to ensure AI augments rather than displaces physician work are the ones who've done their own internal work first.


Carve out some time and space to do whatever helps you develop that individual clarity, whether that's coaching, journaling, attending a discussion group, or something else. Because that clarity is what positions you to participate in—or lead—the collective efforts that will determine whether AI in medicine serves physicians and patients, or the profit machine.


If you're feeling that unsettled sense that something important is at stake—if you need space to think through what AI means for your practice and your identity as a physician—I'd be glad to talk.


A note on irony and agency: I used Claude (yes, an AI) to help research and draft this article. The irony isn't lost on me—but that's partly the point. AI can be a powerful tool when we maintain clarity about what we're using it for and what remains ours to decide. The tool helped me think faster, dig deeper, and organize my many thoughts. But it didn't tell me what to value. It didn’t care about the impact of what I wrote on doctors and the healthcare system. That was all me.

 


 
 
 
