By Dr. Temitayo Bewaji
So here’s the thing about GenAI in hospitals — it doesn’t need tinfoil hats. But it does need cognitive PPE. Boundaries. Supervision. And yeah, training wheels.
The Real Risk Isn’t What You Think
The conversation around generative AI in medicine keeps fixating on the wrong dangers. Yes, hallucinations are concerning. But the more insidious risks? They’re subtler.
Confidence bias. Judgment drift. And that seductive algorithmic voice that leans in and whispers, “Your instinct is spot on, Doctor. No need to second-guess yourself.”
These aren’t dramatic failures — code blues or missed diagnoses that make the evening news. They’re the quiet erosion of clinical reasoning. The kind that happens when convenience replaces cognition, when a well-formatted answer feels too polished to question.
Why GenAI Isn’t Just “Google for Doctors”
We keep comparing GenAI to search engines, but that comparison fundamentally misses the point.
Google makes you work. You sift through sources, evaluate credibility, weigh conflicting information, and choose which link to trust. The cognitive labor stays with you.
GenAI hands you a final answer. It synthesizes, concludes, and presents — with confidence. It says “Trust me” while removing enormous amounts of cognitive friction.
Think of it as ultra-processed information: tasty, convenient, and dangerously easy to overconsume.
So Is GenAI Too Risky for Medicine?
Not at all.
We already work with high-risk, high-benefit tools every single day. Scalpels. Narcotics. Paralytics. Defibrillators. Each one can save a life or end one, depending on how it’s used.
The issue isn’t the tool. It’s the system around it.
We don’t hand a first-year resident a scalpel without training. We don’t give them access to controlled substances without supervision. So why would we hand them a language model without the same rigorous framework?
Five Cognitive Countermeasures for Safe GenAI Integration
Here’s how I believe we can deploy GenAI responsibly — with the same discipline we apply to every other powerful clinical tool.
1. Educate Clinicians on How It Fails
GenAI can’t just be an IT rollout. It needs to be embedded in medical education.
What does that mean practically? Teach clinicians not just how to use these tools, but when they fail. Create safe sandbox environments for experimentation before clinical exposure. Build institutional knowledge about model limitations, biases, and blind spots.
2. Set Clear Boundaries
GenAI should assist, never replace clinical judgment.
Good use cases:
- Note drafting and documentation support
- Patient education materials
- Literature summarization
- Differential diagnosis hypothesis generation
Problematic use cases:
- Shortcutting complex clinical reasoning
- Making diagnostic decisions without verification
- Replacing specialist consultation
Think of it as a “hypothesis generator for you to accept or reject,” not a “diagnosis decider.”
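Boundaries like these are easier to enforce when they're encoded as policy rather than left to memory. Here's a minimal sketch of what an institutional gate might look like before a request ever reaches the model — the category names and function are hypothetical, not from any real deployment:

```python
# Hypothetical policy gate: classify a tagged GenAI request before it
# reaches the model. Categories mirror the good/problematic lists above.

ASSISTIVE_USES = {
    "note_drafting",
    "patient_education",
    "literature_summary",
    "ddx_hypothesis_generation",
}

BLOCKED_USES = {
    "diagnostic_decision",
    "specialist_replacement",
    "reasoning_shortcut",
}

def gate_request(use_case: str) -> str:
    """Return 'allow', 'block', or 'review' for a tagged request."""
    if use_case in ASSISTIVE_USES:
        return "allow"
    if use_case in BLOCKED_USES:
        return "block"
    # Anything unrecognized goes to human governance review, not the model.
    return "review"
```

The point isn't the code — it's that "assist, never replace" becomes a checkable rule instead of a slogan.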
3. Structure Your Prompts Strategically
Vague queries lead to vague answers. And in medicine, vague can be dangerous.
Instead of asking: “What could this be?”
Try prompts that encourage critical thinking:
- “What would argue against this diagnosis?”
- “What alternative explanations exist for these symptoms?”
- “What red flags am I missing?”
Build prompts that challenge assumptions rather than confirm them.
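One lightweight way to make this habitual is a template layer that reframes any working diagnosis as a set of devil's-advocate questions. A sketch, with illustrative template wording of my own:

```python
# Hypothetical prompt builder: wrap a working diagnosis in
# assumption-challenging questions instead of asking for confirmation.

CHALLENGE_TEMPLATES = [
    "What would argue against {dx} in this presentation?",
    "What alternative explanations exist for these symptoms besides {dx}?",
    "What red flags would be missed if {dx} is assumed too early?",
]

def build_challenge_prompts(working_dx: str) -> list[str]:
    """Turn a working diagnosis into devil's-advocate prompts."""
    return [t.format(dx=working_dx) for t in CHALLENGE_TEMPLATES]
```

Calling `build_challenge_prompts("community-acquired pneumonia")` yields three questions that push against the anchor rather than reinforcing it.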
4. Demand Citations
If the model can’t show its receipts, assume it hallucinated.
Embedded references are helpful, but they still need independent verification. No source, no trust. Cross-reference AI-generated information with established clinical guidelines before acting on it.
This isn’t paranoia. It’s basic information hygiene.
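That hygiene can be partially automated. The sketch below triages a model answer by whether it cites anything at all, and whether what it cites comes from an institution-approved source list — the allowlist domains are examples I chose, and this only checks provenance, never correctness:

```python
# Hypothetical citation triage: reject answers with no receipts and
# flag citations outside a trusted allowlist. Domains are illustrative.
import re

TRUSTED_SOURCES = {"cdc.gov", "who.int", "nice.org.uk"}

def triage_citations(answer: str) -> str:
    """Return 'no_source', 'verify', or 'untrusted' for a model answer."""
    domains = re.findall(r"https?://(?:www\.)?([\w.-]+)", answer)
    if not domains:
        return "no_source"   # no receipts: assume hallucination
    if all(d in TRUSTED_SOURCES for d in domains):
        return "verify"      # plausible sources; still verify independently
    return "untrusted"
```

Note that even the best outcome here is "verify" — a cited answer earns a review, never automatic trust.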
5. Monitor, Audit, and Learn
Models drift. Behavior changes. New patterns emerge.
So what do we do? Consider implementing usage logging and periodic reviews. Create “GenAI Morbidity & Mortality” rounds to discuss near-misses. Maintain ongoing training as models update. Establish institutional governance committees.
We need feedback loops that treat AI deployment as an ongoing process, not a one-time implementation.
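The logging half of that loop can start very simply. Here's a sketch of an audit record plus one drift signal — the clinician override rate — with field names that are purely illustrative (and deliberately logging prompt *size*, not content, to keep PHI out of the audit trail):

```python
# Hypothetical audit logger for GenAI interactions, feeding later
# "GenAI M&M" review. All field names are illustrative.
from datetime import datetime, timezone

def log_interaction(log: list, user: str, use_case: str,
                    prompt: str, accepted: bool) -> dict:
    """Append an audit record for one GenAI interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "use_case": use_case,
        "prompt_chars": len(prompt),   # size only, not PHI-bearing content
        "clinician_accepted": accepted,
    }
    log.append(record)
    return record

def override_rate(log: list) -> float:
    """Share of suggestions clinicians rejected: one drift signal."""
    if not log:
        return 0.0
    return sum(1 for r in log if not r["clinician_accepted"]) / len(log)
```

A sudden drop in the override rate is exactly the "quiet erosion" signal worth bringing to M&M rounds: it may mean the model got better, or that clinicians stopped pushing back.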
The Intern Analogy
Here’s what I keep coming back to: When something sounds smart but is occasionally and confidently wrong — that’s not a reason to panic.
That’s just an intern.
And what do we do with interns? We don’t ban them from the hospital. We train them. We supervise them. We build structures and processes around them. We give them graduated responsibility as they prove themselves.
It’s the same principle that applies to any drug that can alleviate pain but also stop respiration, or any procedure that can save a life or end one.
Building Systems, Not Just Using Tools
In medicine, we don’t just trust a tool — we build systems around it.
That’s the difference between reckless adoption and responsible innovation. GenAI has enormous potential to reduce documentation burden, improve patient communication, and support clinical decision-making. But only if we approach it with the same rigor we apply to every other intervention.
The future of AI in healthcare isn’t about whether we use these tools. It’s about whether we use them wisely.
If you still want the tinfoil hat? Make sure it’s sterile.
About Dr. Temitayo Bewaji
Dr. Temitayo Bewaji is a physician passionate about the intersection of technology, innovation, and patient-centered care. Through Bewaji Health, he explores how emerging technologies can enhance clinical practice while maintaining the highest standards of safety and judgment.
Have thoughts on GenAI in clinical practice? Dr. Bewaji welcomes dialogue with fellow clinicians, technologists, and healthcare leaders navigating these transformative times.
Connect with Dr. Bewaji to continue the conversation about responsible AI integration in healthcare.


