With all that’s happening in the world, it may be hard to remember that there have been other stressful, anxiety-provoking times in recent history. For those a bit younger, let’s get in the DeLorean (although I guess you may not know that reference either) and go back to 1983, a tense time in the Cold War when Americans feared the threat of global nuclear war. That was the year Ronald Reagan famously called the Soviet Union the “Evil Empire” and Soviet General Secretary Yuri Andropov proclaimed that the two countries were moving toward “the dangerous ‘red line’” of nuclear Armageddon.

So imagine you’re Stanislav Petrov. It’s September 26, 1983, and you’re sitting at your desk in a Soviet control center monitoring the air defense early warning system. Suddenly, the alarm sounds and the screen in front of you signals that there’s an incoming American intercontinental ballistic missile and orders you to launch a counterstrike on the United States. Almost immediately after that, the system detects another… and another… and another… and another… five incoming missiles in total. The computer system indicates the warning has the highest degree of reliability possible, and you have minutes to respond to the attack. 

But Petrov felt something just wasn’t right, and he decided not to respond. It turned out he was right: the warning was triggered by sunlight reflecting off high-altitude clouds, which the Soviet satellites misinterpreted as missile launches.

Stanislav Petrov was what we now call the human in the loop.

But not just any human. Stanislav Petrov was the only one on his air defense team who had received civilian training, and it turned out that was important. Thirty years later, he noted, “My colleagues were all professional soldiers, they were taught to give and obey orders.” In other words, had they been on duty that night, there would have been a different outcome. As he put it, “…they were lucky it was me on shift that night.”

Although you may never have heard of Petrov, it’s hard not to believe this was the most significant decision made by a single human being in the last fifty years, one that likely affected hundreds of millions of lives. While not every recommendation a clinician makes has that same global impact, each clinical decision affects a human life. And in totality, clinical decisions affect literally millions of lives across the globe every day. A study led by David Newman-Toker, a colleague of mine from Johns Hopkins, reported that nearly 800,000 people a year become permanently disabled or die as a result of diagnostic error in the United States alone. Clinical decisions matter. Having those decisions supported by best evidence at the point of care matters. And having human oversight of clinical decision support matters. A recent article in Nature Medicine put it this way: “One solution to ensuring transparency, accountability and reproducibility is to keep a human in the loop to preserve human oversight. This is crucial for high-risk areas such as clinical decision support, but as with most dilemmas facing AI, how the human is implemented in the loop matters” (Nat Med 31, 3207, 2025).

But it’s not just how the human is implemented that matters; it’s who that human is. Just as Petrov felt his civilian training was critical that day, the training of the human in the loop in AI-enabled clinical decision support is also critical. It’s not only important to have a subject matter expert involved in writing content in a particular area; it’s important to have an expert in evidence-based methodology involved as well. It’s important to have experts who critically appraise research findings and feel comfortable questioning them, even when those findings may appear highly reliable to others because they’re published in leading medical journals. People sometimes ask why someone who is not an expert in a particular subject is listed as one of the authors on that subject in DynaMed. That’s because they are an expert in evidence-based methodology. Not just any Tom, Dick, or Stanislav.

Roy Ziegelstein, MD, MACP
Editor in Chief and CMO, DynaMed
Read more about Dr. Z
Hear Dr. Z on his new podcast Coeur de Roy: The Heart of Clinical Care on Apple Podcasts, Spotify, or wherever you get your podcasts
Connect with Dr. Z on LinkedIn