
In this era of evidence-based medicine, you'd be right to expect hospitals and doctors' offices to base their operational interventions on evidence, too. The reality is, they don't.
Instead, most try to encourage evidence-based care by doing things that seem like good ideas: sending mailers, putting up posters, calling patients, adding alerts to electronic health records, engaging community health workers, and involving care managers. Most of the time it's impossible to tell whether such interventions actually improve outcomes, such as whether patients obtain preventive care, avoid being readmitted to the hospital, or receive evidence-based treatment.
And here's the real kicker: Hospitals also don't know whether a slightly different mailer, poster, call, or message would work better. Yet without much additional work, such programs can be thoughtfully and deliberately tweaked to yield real payoffs.
For the past year, my colleagues and I at NYU Langone Health have examined more than half a dozen of the system-level practices instituted by hospital administrators. As three of us describe in this week's New England Journal of Medicine, we were able to test them, and in some cases improve them, using the fundamental tool of evidence-based medicine: randomized testing.
Many programs are implemented across the board because they seem like patently obvious ways to help patients get the best care. We don’t need a randomized controlled trial to prove that parachutes are effective, the thinking goes, so efforts are pursued for every patient, every nurse, or every doctor in a system. For instance, we call every patient after hospitalization to ask about recovery, medications, transportation, and follow-up. We believe this will reduce the chances that the patient will be readmitted and at the same time increase his or her satisfaction with our hospital. Most hospitals nowadays do something similar.
But many things can go wrong with that approach. First, with wholesale implementation, the only way to test whether an intervention improved outcomes is to compare results before the intervention with results afterward. That lends itself to all kinds of bias. What if the before period for post-discharge calling was winter flu season and the after period is summer? Fewer people will be readmitted simply because of seasonal changes, which has nothing to do with the intervention. What if there was another concurrent intervention or policy change? What if coding practices had changed? What if a new champion for the intervention had come on board?
Second, interventions that are applied wholesale but are then measured only in patients who participate — answer the phone, complete a survey, agree to be followed by a community health worker, and the like — are subject to substantial selection bias. Those who agree to participate are usually motivated and have better outcomes no matter what the intervention.
So when things get better, or don’t, it’s impossible to know whether to praise or blame the intervention or whether something else is responsible for the results. It turns out that there really aren’t many parachutes in health care delivery.
To test system-wide interventions, my colleagues and I created a rapid quality improvement study unit at NYU Langone Health that randomizes system-level interventions. Randomizing who gets an intervention, or which version of an intervention someone gets, reduces or eliminates the biases I described earlier.
In the first year, we conducted more than a dozen randomized interventions, including repeated testing of some of them. Unlike clinical trials, which might take years to recruit just a few hundred patients, we were able to randomize operational changes in a matter of weeks. In our first test, which compared two different alert messages reminding nurses to order the influenza vaccine, it took just 12 weeks to generate 91,168 alerts. We were also able to study outcomes with minimal effort by retrieving data already captured in routine clinical care.
Thanks to this program, we now know that our hospital’s existing post-discharge telephone call program neither reduces readmissions nor improves patients’ ratings of their experience in the hospital. Are we going to fire all of the folks making those phone calls? Of course not. Instead, we can put them to more effective use. We could stop the one-time calls to patients at low risk of needing readmission and instead call the high-risk patients more often, or just call those who miss follow-up appointments. We could embed callers within primary care offices to be better connected to outpatient care rather than sequestering them in a central call center. Or we could alter the call script to focus more on issues that might be addressable. How will we know if those new ideas are effective? By testing them, of course.
People sometimes worry that randomization will deprive them of the most effective care. But the reality is that by not testing, we are depriving all of our patients of the most effective care. It's long past time for clinical operations to catch up with clinical trials, which have led the way in identifying the best treatments. It's now time to rapidly, effectively, and efficiently test and improve the ways we deliver those best treatments.
In 2013, the Institute of Medicine (now the National Academy of Medicine) described an aspirational vision of the health care system of the future: a "learning health system." It's an inspiring definition: "A system in which science, informatics, incentives, and culture are aligned for continuous improvement and innovation, with best practices seamlessly embedded in the delivery process, patients and families active participants in all elements, and new knowledge captured as an integral by-product of the delivery experience."
Moving health systems toward routine experimentation for learning is one way to get there.
Leora I. Horwitz, M.D., is director of the Center for Healthcare Innovation and Delivery Science at NYU Langone Medical Center, director of the Division of Healthcare Delivery Science at NYU School of Medicine, associate professor of population health and of medicine at NYU, and a practicing hospitalist.