By Temitayo Bewaji
The conference room was beautiful. Floor-to-ceiling windows. Mahogany table. Leather chairs that probably cost more than my first car.
But the hospital administrator sitting across from me looked like she hadn’t slept in weeks.
“We spent $250,000 on an AI system,” she said, her voice barely above a whisper. “The vendor promised it would make life so much easier. The technology works perfectly. But six months later, our clinicians still won’t touch it.”
She paused, and I watched her try to hold it together.
“They’re just resistant to change. They don’t trust technology. I don’t know what else to do.”
I’ve had this exact conversation seventeen times in the past year.
Different hospitals. Different AI systems. Different leaders. But always the same confused, exhausted look when they tell me about their expensive, beautiful technology that nobody uses.
And here’s what I’ve learned: They’re all solving the wrong problem.
The Enemy Isn’t Who You Think It Is
Let me be blunt: Resistance isn’t your problem.
I know that’s what it looks like. I know your clinicians aren’t using the system. I know they’re making excuses, finding workarounds, and going back to the old ways of doing things.
But resistance is just a symptom.
The real killer is silence.
Here’s what actually happens when you roll out AI without building trust first:
You send an email announcing the new system. Maybe you hold a training session. You talk about features, workflows, and efficiency gains. Everyone nods. Some people even seem excited.
Then you flip the switch.
And… nothing.
Or worse, something does happen—but it’s not what you expected.
What They’re Really Thinking (But Not Saying)
In the silence, in that gap between your announcement and their adoption, your team is having conversations you’re not part of.
In the break room. In the parking lot. In whispered exchanges between patient visits.
And those conversations sound like this:
“Do you really trust this thing to catch what we’d catch?”
“How long before they decide they don’t need as many of us?”
“Remember that EMR system they were so excited about? We spent months learning it, and they replaced it a year later.”
“Great. One more thing to slow me down when I’m already seeing patients every 12 minutes.”
These aren’t irrational fears. These are the stories people tell themselves when you introduce change without building trust first.
And here’s the thing that will keep you up at night: They might be right.
The Moment I Realized Everything We Know About AI Adoption Is Wrong
I’ll never forget the day a physician changed my entire perspective on this.
Her name was Dr. Martinez (I’m changing details to protect her privacy, but this conversation was real).
Leadership had labeled her a “problem.” She was vocal in her skepticism. She questioned everything about the new AI diagnostic tool. She was, in their words, “blocking progress.”
So they asked me to talk to her.
I expected defensiveness. Maybe even hostility.
What I got instead was something that haunted me for weeks.
“Do you know what happened last month?” she asked me.
I didn’t.
“A patient came in with symptoms the AI flagged as low-risk. Standard protocol. But something felt off to me. I couldn’t put my finger on it—just thirty years of pattern recognition telling me to look deeper.”
She paused.
“It was a pulmonary embolism. If I’d trusted the system and sent her home, she probably would have died that night.”
She wasn’t resistant to technology. She was terrified of being pushed to override her judgment.
And nobody had ever asked her about it.
The Text Message That Changed Everything
Three months after our conversation, Dr. Martinez sent me a text message I still have saved:
“I thought this AI thing would slow me down. I was wrong. It saved me 40 minutes today, and I actually got to talk to my patients.”
What changed?
The technology didn’t. She’d had access to the same system for months.
But something else did.
The Five Things Nobody Tells You About AI Adoption (But Everyone Should Know)
After watching implementations succeed and fail across dozens of healthcare settings, here’s what actually makes the difference:
1. Stop Selling Features. Start Solving Pain.
Your team doesn’t care that your AI uses machine learning and natural language processing.
They care that they spent two hours yesterday responding to patient messages that could have been triaged automatically.
They care that they’re documenting visits until 9 PM because the system is clunky.
They care that they’re burning out and nobody seems to notice.
Start there. Name the pain. Show how AI addresses it. Make it about them, not about innovation.
2. Give Them the Override Button (And Mean It)
Here’s a secret: The most trusted AI systems aren’t the most accurate ones.
They’re the ones that make clinicians feel empowered, not replaced.
Every AI decision should come with a big, obvious button that says: “I’m the expert here, and I disagree.”
And when someone clicks it, don’t send passive-aggressive alerts or create friction. Thank them. Learn from them. Show them their judgment still matters.
Because it does.
3. Prove It Works on the Annoying Stuff First
Don’t start with life-or-death diagnostic decisions.
Start with the tasks that make everyone groan:
- Prior authorization paperwork
- Routine message triage
- Appointment scheduling conflicts
- Documentation templates
- Patient education materials
Win on the annoying stuff, and you’ll earn credibility for the important stuff.
4. Tell the Truth. All of It.
“Will this replace jobs?”
“What happens if it makes a mistake?”
“Why should we trust this when the last system failed?”
These are the questions your team is asking in private.
Answer them in public. Honestly. Even when the answers are uncomfortable.
Silence breeds conspiracy theories. Truth builds trust.
5. Create a Feedback Loop That Actually Loops
Here’s how most AI implementations handle feedback:
“Submit your concerns to this portal. We’ll review them quarterly.”
Here’s how successful ones do it:
“Tell us what’s not working. We’ll fix it this week. Here’s what we changed based on last week’s feedback.”
The speed of response matters more than you think.
The Question That Separates Success from Failure
Before you implement AI—before you sign the contract, before you announce it to staff, before you do anything—answer this question:
“What will I do when my best clinician tells me the AI is wrong?”
If your answer is:
- “We’ll review the algorithm”
- “We’ll look at the data”
- “We need to trust the system”
You’re not ready.
The right answer is: “I’ll listen. I’ll investigate. And if they’re right, we’ll fix it.”
Because here’s what I’ve learned: The hospitals where AI succeeds aren’t the ones with the best technology.
They’re the ones where clinical judgment is still sacred.
What Happened to Dr. Martinez
Six months after that first conversation, Dr. Martinez wasn’t just using the AI system.
She was training other physicians on it.
Not because someone made her. Not because she was incentivized. But because she’d moved through a journey that most implementations skip entirely:
Fear → Understanding → Trust → Advocacy
She understood what the AI could do well (routine triage, pattern recognition in standard cases).
She understood what it couldn’t do (catch the subtle, intuitive red flags that come from experience).
She understood where she was still essential (the judgment calls that save lives).
And that understanding transformed everything.
The Real Cost of Getting This Wrong
That $250,000 the hospital spent on unused technology?
That’s not even the real cost.
The real cost is:
- Clinicians who are more burned out because they’re fighting one more system
- Patients who wait longer because workflows are disrupted
- Leaders who become cynical about innovation
- Teams who learn to smile and nod through the next rollout, knowing it won’t matter
The real cost is trust. And once you lose it, it’s almost impossible to get back.
Here’s What You Can Do Today
You don’t need a complete overhaul. You just need to start asking different questions.
Before your next AI implementation, try this:
- Schedule a listening session. Not a training. Not a presentation. A conversation where you only listen.
- Ask: “What are you afraid of?” Then sit in the uncomfortable silence and actually hear the answer.
- Identify one painful task. Find the thing everyone hates doing. Start there.
- Create a 48-hour feedback loop. When someone reports a problem, respond within two days. Every time.
- Share a failure story. Tell your team about an AI implementation that went wrong somewhere else, and what you learned from it.
These aren’t big, expensive initiatives.
They’re acts of respect.
And respect is what builds trust.
The Conversation We Need to Have
I started this article with a story about a hospital administrator who spent $250,000 on a system nobody uses.
Want to know how it ended?
They didn’t fix the technology. They fixed the conversation.
They brought clinicians into the design process. They addressed fears publicly and built quick wins on mundane tasks. They created real feedback loops.
Three months later, adoption went from 12% to 78%.
Same system. Same technology. Different approach.
Because here’s the truth most people don’t want to hear:
Your AI implementation isn’t failing because your technology is bad. It’s failing because you’re treating trust like a checkbox instead of a process.
Your Move
I’ll leave you with this:
Somewhere in your organization right now, someone knows exactly why your AI system isn’t working.
Maybe it’s a nurse who’s found a workaround because the workflow doesn’t make sense.
Maybe it’s a physician who’s scared to speak up because they don’t want to seem “anti-innovation.”
Maybe it’s a front-desk staff member who’s been clicking through screens for six months, pretending it’s helping.
What if you asked them?
Not in a survey. Not in a focus group. But in a real conversation where you’re genuinely curious about their experience.
That’s where trust starts.
And trust is where innovation actually happens.
I'm Temitayo Bewaji, founder of Bewaji Healthcare Solutions. I've spent my career at the intersection of healthcare innovation and human psychology, helping organizations implement technology in ways that enhance rather than erode trust. If your AI strategy is struggling, let's talk about what's really holding it back.
Ready to build an AI strategy your team will actually trust? Let’s start a conversation.
#HealthcareAI #HealthcareInnovation #DigitalTransformation #HealthTech #ClinicalTrust #HealthcareLeadership #AIinHealthcare #ChangeManagement