In the late 19th century, English polymath Sir Francis Galton noted that tall parents often had kids shorter than they were, while short parents often ended up with taller kids. He dubbed this regression to the mean — when something measured as extreme in a first instance is likely to be measured as less extreme later on.
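
To make the pattern concrete, here is a minimal simulation sketch in Python (with made-up numbers, not Galton's data): two noisy measurements of the same stable trait are taken, and the individuals who look most extreme the first time look markedly less extreme the second time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A stable underlying trait plus independent measurement noise,
# observed twice for each of n simulated individuals.
true_value = rng.normal(0.0, 1.0, size=n)
first = true_value + rng.normal(0.0, 1.0, size=n)
second = true_value + rng.normal(0.0, 1.0, size=n)

# Select the individuals who look extreme on the first measurement.
extreme = first > 2.0

# On the second measurement, the same individuals sit only about half
# as far from the population mean of 0, even though nothing about them
# has changed: the selection picked up favorable noise the first time.
print(first[extreme].mean())
print(second[extreme].mean())
```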

That concept has important implications for health care policy today, one of which is that more health policymakers and health care researchers should use randomized evaluations to avoid mistaking regression to the mean for the effects of policies.

In the U.S. health care system, the very highest-cost patients — known as super-utilizers — have been a focus of attention. That is because this 1% of patients account for almost 25% of all U.S. health care spending. A spate of high-profile studies have reported dramatic reductions in health care spending from programs designed to keep super-utilizers out of the hospital through various means, such as coordinating their outpatient care and coaching them on managing their conditions and medications.

This work raises an important question: Does hospital use decline because of the programs or, due to regression to the mean, because high-use patients are likely to use care less in the future?

Several colleagues and I set out to answer that question in partnership with the Camden Coalition of Healthcare Providers. The coalition created a comprehensive health care delivery model that aims to meet the medical and social service needs of very high-use patients, those who have had at least two hospital admissions in the last six months and two or more chronic conditions, among other criteria. The coalition has been widely heralded as a promising approach for reducing costs and improving health: Dr. Atul Gawande profiled the program in the New Yorker, and the coalition’s founder won a MacArthur “genius grant.”

As a data-driven, learning organization, the coalition did not want to rest on its considerable laurels. To learn what its program was doing — and innovate based on the findings — it partnered with our research team to conduct a randomized controlled trial (RCT).

We randomly assigned patients who were eligible and who consented to participate to receive either the coalition’s program or status quo care. Randomization ensured that, at the start of the program, these two groups were similar. That way, the outcomes observed in the control group would tell us what would have happened over time in the intervention group in the absence of the program.

When we looked at patients in the intervention group, the results of the Camden Coalition’s program looked very encouraging: Participants in this group visited the hospital about 40% less in the six months after the intervention. But as we report in this week’s New England Journal of Medicine, we saw the same decline in hospital use among those in the control group. These results tell us that the improvements we saw in the intervention group were the result of regression to the mean, not the coalition’s program.
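
To illustrate why the control group was essential, here is a minimal simulation sketch. It uses entirely invented numbers rather than the trial's data, and it assumes an eligibility rule loosely modeled on "two or more recent admissions": patients are enrolled at a temporary peak in their hospital use, so their admissions fall over the next period even when no program is run at all.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Each simulated patient has a stable underlying admission rate, but the
# count observed in any six-month window is noisy (Poisson).
underlying_rate = rng.gamma(shape=1.0, scale=1.0, size=n)
admissions_before = rng.poisson(underlying_rate)
admissions_after = rng.poisson(underlying_rate)  # no program is run

# Enroll only patients who look extreme in the "before" window,
# mimicking an eligibility rule like "two or more recent admissions."
eligible = admissions_before >= 2
before = admissions_before[eligible]
after = admissions_after[eligible]

# Randomly split the eligible patients into two arms.
arm = rng.random(eligible.sum()) < 0.5

# A naive before/after comparison shows a sizable "drop" in admissions
# even though no program was run at all...
print(before.mean(), after.mean())

# ...and because assignment was random, the two arms decline identically,
# which is exactly what a real control group reveals when an observed
# improvement is regression to the mean rather than a program effect.
print(after[arm].mean(), after[~arm].mean())
```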

These results offer an important lesson: We wouldn’t have accurately measured the intervention’s impact if we hadn’t done a randomized controlled trial.

Since we learn more from RCTs than just the impact of an intervention on a single outcome, finding no effect doesn't mean the end of the road. In the Camden Coalition trial, our results suggest that existing systems serve the complex needs of the coalition's patients poorly. The Camden group (and others) are now exploring models that take a more complete approach to providing care.

Regression to the mean isn’t unique to health care, but it is a particularly salient concern for studies of health care programs that are often (and understandably) implemented in response to extreme signals like advanced disease, high expenditures, or excessive prescribing. Fortunately, when randomized controlled trials are feasible and ethical, they provide a way to determine the effect of a program free from concerns about regression to the mean and other biases.

Concern about excessive prescribing presents another example where regression to the mean may lead to spurious findings but where an RCT can provide clear results. The Centers for Medicare and Medicaid Services recently partnered with researchers to conduct randomized evaluations of interventions designed to curb overprescribing of Seroquel, an antipsychotic drug. The researchers found that sending strongly worded letters that compared high prescribers’ behavior to their peers’ reduced overprescribing by 11%.

We can be confident that the letters, rather than regression to the mean (today's extreme prescribers are likely to be less extreme tomorrow), are what caused the reduction in prescribing, because the trial included a randomized control group of prescribers who received only standard CMS outreach.

That study also shows how we can build on and learn from any finding, whether it is positive, negative, or null. The CMS overprescribing study built on a prior randomized controlled trial that found the original peer comparison letters CMS had been regularly sending did not reduce prescribing of controlled substances. As a result, the researchers and CMS drew on psychological and other research to devise a different kind of letter, sent to a different set of providers, which then did reduce prescribing.

Randomized controlled trials can be used to study programs and policies across the health care industry. In my experience leading J-PAL North America's U.S. Health Care Delivery Initiative, which funds and conducts randomized controlled trials of health care delivery interventions, RCTs have shed light on issues such as the effectiveness of clinical decision support alerts in reducing inappropriate medical imaging orders and of nudges in improving consumers' choices of health insurance. And there are ongoing RCTs of many more interventions, including food as medicine, home visits by nurses, and opioid buyback programs.

J-PAL North America is part of a growing movement of health systems, payers, providers, and others that are using randomized controlled trials to test and learn, whether through evaluations of whole programs or of quick process improvements. Researchers at NYU Langone Health use rapid-cycle randomized tests aimed at quickly evaluating simple process improvements to encourage best practices. That single medical center launched 10 such trials in its first year alone and hopes to launch dozens more.

Finding solutions to address the complex medical and social needs of patients is a pressing issue. Yet all too often we don't rigorously evaluate these solutions, which hurts the patients we could be helping. Randomized controlled trials are essential tools for helping us learn, adapt, and move forward on innovative solutions that make people's lives better.

Amy Finkelstein, Ph.D., is professor of economics at the Massachusetts Institute of Technology, and co-scientific director of J-PAL North America.

  • There should be basic standards for this kind of research. The NIH, FTC, CDC, and CMS have all failed to protect American patients from lies, misinformation, in marketing and distorting research results to please industry. People die from this kind of misinformation when it is spread on social media or distorted by various media outlets for financial gain.

    The general public no longer believes fact-based information, because our government has failed to regulate misinformation in healthcare. No agency is tracking the deceptive practices of healthcare corporations that exploit and defraud Medicare while dumping certain patients. Patients are left to suffer and die, shipped hundreds of miles, because even the so-called nonprofit hospitals check their financial status before admitting them. They can also weed out patients who might die to keep their ratings up. Hospitals are dumping patients on nursing homes to die after denying them care, but only if the insurance does not pay enough. Patients with good insurance are at risk too, provided with extra healthcare they do not need.

    Exploiting sick people and deliberately misleading the public are leading to a lot of deaths, yet our regulatory agencies refuse to act. Sites like this have a responsibility to explain which industries and corporations are paying for health research, and to identify marketing and deceptive industry-funded research.

    • @Branwen
      “lies, misinformation, in marketing and distorting research results to please industry”
      What are you referring to here? In these cases, is there a way to determine whether the research is valid without discounting the funding source or authors?
