Dr. Katherine Eisenberg didn't set out to revolutionize clinical decision support—she started as a family practice physician who wanted better tools during her busiest clinic days. Today, as Senior Medical Director of Dyna AI and chair of EBSCO Clinical Decisions’ Generative AI Advisory Council, she's building exactly those tools while helping shape responsible AI standards across healthcare.

In this conversation, Dr. Eisenberg discusses her journey from epidemiology to AI innovation, what it takes to build AI tools that are safe, equitable, and truly designed for the point of care, and why clinician voices matter more than ever as artificial intelligence reshapes medicine.

Practitioner Roots

You built a thriving family practice before moving into clinical informatics and AI innovation. What inspired you to focus on creating tools that support clinicians and improve care for even more patients — beyond those you see in person?
 

I've always been interested in using systems and data to improve care, starting from when I first discovered epidemiology as a discipline in college. Today, I love being able to have an impact both on the person in front of me and on those receiving care throughout the system.
 

You're known for perfecting your chocolate chip cookie recipe. Is building Dyna AI more like following your tried-and-true baking process, or does it require the same kind of thoughtful experimentation and fine-tuning that goes into creating your signature cookies?
 

That's a fun question! I actually think there are a lot of similarities. In baking, there are certain elements that are required, and measurements must be precise. Any experimentation needs to be carefully calibrated to maintain the integrity of the system. 

The same is true of technology — there is always room for innovation and creativity, but there are certain core needs that must be maintained.
 

AI Innovation Journey

Building Dyna AI has involved balancing speed, clinical precision, and continuous improvement. What have been the most important lessons from developing and validating the system to make sure it delivers high-quality, trustworthy answers for clinicians at the point of care?
 

I have always loved working in a collaborative environment, but even so, the biggest lesson so far has been the degree of collaboration needed among colleagues across technology, business, and clinical fields to be successful. 

Today, we are still considering how we can deepen that collaboration. I often think about the proverb, "If you want to go fast, go alone; if you want to go far, go together." 

One great example is our "guardrails" for Dyna AI. Our technology team has safeguards in place to avoid answering types of questions that our clinical team hasn't approved yet, and our clinical team advises on where those edges should be.
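To make the guardrail idea concrete, here is a minimal sketch of what a topic-allowlist check can look like. Every name in it (APPROVED_TOPICS, classify_topic, the keyword heuristic) is a hypothetical stand-in for illustration, not Dyna AI's actual implementation:

```python
# Hypothetical topic-allowlist guardrail: the clinical team approves topics,
# and the system declines anything outside them rather than guessing.
APPROVED_TOPICS = {"drug dosing", "differential diagnosis", "treatment guidelines"}

def classify_topic(question: str) -> str:
    """Toy stand-in for a topic classifier; a production system would
    use a trained model rather than keyword matching."""
    keywords = {
        "dose": "drug dosing",
        "differential": "differential diagnosis",
        "treatment": "treatment guidelines",
    }
    for word, topic in keywords.items():
        if word in question.lower():
            return topic
    return "out of scope"

def answer(question: str) -> str:
    """Refuse questions outside clinician-approved territory."""
    topic = classify_topic(question)
    if topic not in APPROVED_TOPICS:
        return "This question is outside the topics currently supported."
    return f"[evidence-based answer within approved topic: {topic}]"

print(answer("What is the usual starting dose of metformin?"))
print(answer("Can you draft my hospital's staffing budget?"))
```

The design choice worth noting is the refusal path: when the classifier can't place a question inside an approved topic, the system declines explicitly instead of producing an unvetted answer.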
 

You chair EBSCO Clinical Decisions' Generative AI Advisory Council. When doctors are skeptical about AI in clinical decision support, what's something you tell them that might change their perspective?
 

I would acknowledge that some degree of skepticism is absolutely appropriate when we're talking about applying a disruptive new technology to clinical practice. AI is already showing up in clinical workflows, whether we like it or not. 

The real risk is in using AI tools without understanding the ingredients and constraints that go into building them. More and more, I find myself inviting clinicians to engage in whatever way works for them, so we have more clinical voices helping shape the systems being built around AI.
 

You serve as a Work Group Lead for the Coalition for Health AI (CHAI), helping develop responsible AI standards and testing frameworks for clinical decision support tools. What's the biggest gap you've seen between what AI vendors claim their tools can do and what's actually needed to ensure those tools are safe, trustworthy, and ready for real-world clinical use?
 

The biggest gap I see is between the way many AI tools are evaluated and the way clinical care actually happens. A lot of evaluation metrics are developed for convenience or comparability, but they don't always reflect real-world conditions. For example, benchmarks based on medical licensing exams can be flashy, but there are real questions about whether they meaningfully represent day-to-day clinical practice. 

Ensuring safety and trustworthiness means testing tools in real clinical settings (and we're starting to see more of that emerge in the literature) as well as ongoing monitoring with continuous clinician involvement.
 

What was the moment when you first experimented with clinical AI and thought, "Wow, this could really be a game changer"?
 

There was a moment early on when, as a team, we asked ourselves whether applying AI in this context was the right path. We were very aware of the risks. But when we stood up the first prototype, what struck us was both how well it worked and how well it fit into our work of supplying the best available evidence at the point of care. 

It started to feel less like a novelty and more like something that could actually support clinicians in real moments of pressure. That was the point when it became clear there was real potential here, and things have only accelerated since.
 

AI Governance

The conversation about AI governance in healthcare is gaining momentum. Why is this such a critical topic right now, and what should leaders prioritize as they build AI into their workflows?
 

Many healthcare professionals are surprised to learn that there's very little formal oversight for how AI is currently being applied across healthcare. When we talk about AI governance, it can sound technocratic or abstract, but in reality, it's a pressing, practical need. 

In the absence of shared standards, individual health systems and vendors have been left to define their own rules over the past few years. That fragmentation creates real risk both for patient safety and for clinician trust, which is why thoughtful governance has become so urgent. I would advise systems to think about post-deployment monitoring in addition to their initial assessments, and to select vendors they want to partner with over time: vendors who are open to feedback and who continue to innovate. 
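To illustrate what lightweight post-deployment monitoring can look like, here is a minimal sketch that tracks how often clinicians override a tool's suggestions and flags the tool for review when the rolling rate climbs. The metric, window size, and threshold are assumptions made for illustration, not an established standard or EBSCO's actual process:

```python
from collections import deque

class OverrideMonitor:
    """Track how often clinicians override or reject AI suggestions and
    flag the tool for governance review when the rolling rate climbs."""

    def __init__(self, window: int = 500, alert_rate: float = 0.10):
        self.events = deque(maxlen=window)  # rolling window of recent outcomes
        self.alert_rate = alert_rate        # illustrative review threshold

    def record(self, clinician_overrode: bool) -> None:
        self.events.append(clinician_overrode)
        if len(self.events) == self.events.maxlen:
            rate = sum(self.events) / len(self.events)
            if rate > self.alert_rate:
                print(f"Override rate {rate:.1%} exceeds threshold; "
                      "escalate to clinical governance review.")
```

The point is not the specific numbers but the loop: the tool keeps reporting on itself after deployment, with clinicians supplying the signal.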
 

Making Evidence Accessible & Equitable

You're passionate about health equity and have a PhD in epidemiology. How do you make sure Dyna AI doesn't just optimize for academic medical centers but also serves rural clinics and under-resourced communities?
 

This is where governance, standards, and evaluation become equity issues. Many large academic medical centers now have robust internal staffing and processes, including data scientists, informaticists, and clinical leaders who can critically assess AI tools before they're used in care. 

Under-resourced settings often don't have that infrastructure in place. That means they're relying heavily on vendors to do that diligence well, and to deliver tools that are just as safe, thoughtful, and clinically valid as those used in well-resourced environments. 

Our team takes that responsibility seriously because clinicians and patients in those settings deserve the same level of safety and rigor.
 

You recently shared reflections from HLTH. How can AI tools in healthcare be designed to ensure patients and clinicians from all communities benefit equally from better access to medical knowledge?
 

This requires an active, ongoing effort from teams developing AI tools to look for discrepancies that could affect patient care. That means examining how tools perform across different populations, making sure underrepresented groups are adequately represented, and evaluating for patterns that reinforce harmful stereotypes. 

For example, if the first suggested diagnosis for a man with palpitations is a cardiac arrhythmia, but for a woman with the same symptoms it's anxiety, that's an important clinical issue that needs to be addressed.
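One way teams can operationalize that kind of check is a paired-vignette test: present identical symptoms, vary a single patient attribute, and compare the top-ranked suggestion. The sketch below is purely illustrative (toy_model deliberately encodes the bias described above), not an actual Dyna AI evaluation harness:

```python
from typing import Callable

def paired_vignette_check(model: Callable[[str], list],
                          template: str,
                          variants: list) -> dict:
    """Run the same clinical vignette with one attribute swapped and
    return each variant's top-ranked suggestion."""
    return {v: model(template.format(patient=v))[0] for v in variants}

def toy_model(vignette: str) -> list:
    """Toy model exhibiting exactly the bias described above."""
    if "woman" in vignette:
        return ["anxiety", "cardiac arrhythmia"]
    return ["cardiac arrhythmia", "anxiety"]

results = paired_vignette_check(
    toy_model,
    "A 45-year-old {patient} presents with new palpitations.",
    ["man", "woman"],
)
if len(set(results.values())) > 1:
    print("Top suggestion diverges by patient attribute:", results)
```

Because only one attribute changes between vignettes, any divergence in the output is attributable to that attribute rather than to the clinical presentation.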
 

Discovery Challenge

How does your clinical practice influence the information tools you build?
 

I love this question because I've always had a simple goal in mind when advising our team: "Would I want to use this on a busy clinic day?" 

While it's essential to gather broad clinician feedback when developing tools for practice, I've also kept my own clinical wish list as a north star: fast, clear answers that reflect clinical thinking, reduce cognitive load rather than adding to it, and anticipate clinicians' needs. Those priorities, along with those of my colleagues, continue to shape how we design Dyna AI's search experience.
 

Many medical students and clinicians are turning to Google, ChatGPT, and Perplexity for clinical answers during busy moments at the point of care. What's the most important caution or warning you would offer about relying on these tools for patient care decisions?
 

I completely understand why people reach for these tools. We're increasingly familiar with them in our personal lives, and they can feel fast and convenient. The caveat is that general-purpose AI tools aren't built with clinical accountability in mind. An enormous amount of nuance and expertise goes into medical decision-making that these tools aren't designed to represent. They don't reliably surface uncertainty or the limits of the medical evidence, and sourcing isn't always transparent. 

They can be useful for brainstorming, for a very general introduction to a topic, or for preparing for medical appointments, but a certain amount of caution is still needed, along with collaboration with the healthcare team to put the information in context. In the past few weeks, we've also seen announcements from OpenAI and others about moving more formally into the healthcare market. I think there is truly some potential benefit there, but I would also encourage people to be cautious about uploading personal medical data to these tools. 
 

We've unpacked a lot, but what's one thing that hasn't been asked that you want readers to know?
 

I'd like to speak directly to other physicians here. 

This is an important moment for us to engage. It doesn't mean we all need to become technologists, but it does mean taking the time to educate ourselves enough to ask good questions and recognize what responsible AI use looks like. AI is being built and deployed right now. 

Our input can meaningfully shape what that looks like, and given the degree of disruption generative AI has brought and will continue to bring, this is a unique moment to have our voices heard rather than sidelined.
 

Looking Forward: Building Healthcare AI Worth Trusting

Responsible clinical AI must be built by people who understand the reality of patient care, not just the technology behind it.

Dr. Eisenberg's journey from epidemiology student to family physician to Dyna AI leader exemplifies this principle. Her work with Dyna AI and the Coalition for Health AI demonstrates what's possible when clinical expertise and equity considerations guide innovation from the start. 

She remains in practice because that perspective is non-negotiable: the tools she builds must work during real clinic days, serve all communities equally, and earn the trust of clinicians who will rely on them at the point of care.