Flip-flops on health advice have become the norm rather than the exception. We’ve been back and forth about butter several times — is it bad or perfectly fine? Hormone replacement therapy was often prescribed to women to protect their hearts until more rigorous randomized trials showed that it slightly increases the risk of heart disease and breast cancer.
While much of the confusion is due to sensational headlines and entertainment-focused media reports, the major culprit in contradictory science is lousy research design. Weak design is responsible for more than half of untrustworthy scientific findings, and those findings are at the root of flip-flopping recommendations.
The risk of bad science goes beyond confusing nutrition advice or improper use of medications. It extends to health care policies that affect the entire nation. Two important national policies — accountable care organizations and pay for performance — were based on now-discredited science. Although both are failures, they appear to be favored by President-elect Donald Trump.
Accountable care isn’t really accountable
The federal government promoted Pioneer Accountable Care Organizations (ACOs) to reduce Medicare costs and promote quality of care. ACOs are supposed to work by financially rewarding hospitals and physicians for delivering better care or reducing costs, and by penalizing those that fail to do so.
Unfortunately, the research on ACOs, which had appeared in leading medical journals, relied on data from a hand-picked group of hospitals and physicians known for superior care. The control group included less-motivated hospitals that had less experience with patient care management. This biased the study in favor of ACOs, a fact not fully reported by the researchers. A key study ultimately found only minuscule savings of 1 percent, which actually represented a net loss given the high cost of developing ACO programs.
The study’s flaws meant that its conclusions could not be extended to other hospitals and medical systems — but that is exactly what the federal government did in expanding the ACO program nationwide. Not surprisingly, almost half of these Pioneer ACOs dropped out within a year. Last year, Dartmouth-Hitchcock, the eminent teaching hospital associated with Dartmouth College, whose researchers coined the term ACO, quit the program. It cited inequitable payments imposed by the Centers for Medicare and Medicaid Services for missing targets, even though the hospital had generated savings.
More disconcerting, a recent study in the New England Journal of Medicine reports that new ACOs generated zero savings. (Again, this represents a significant loss after accounting for the cost of implementing them.) But the ineffective and costly incentive program continues, with indications that Trump may endorse it.
Flaws in pay for performance
Pay for performance, another policy intended to improve quality, sounds intuitive enough: pay doctors based on the quality of care they deliver, not simply for the number of services they provide. Early studies in the New England Journal of Medicine and the British Journal of General Practice appeared promising, claiming that such programs improved patient care and health.
But that research failed in a crucial way: It didn’t account for improvements in health care and health trends that were already occurring, independent of pay for performance. These improvements were likely due to better technology and other ongoing advances in medicine. Yet the studies led to the creation of pay for performance programs all over the world, including in the United States.
Subsequent, stronger studies of pay for performance that controlled for biases consistently overturned the early false hopes. One worldwide systematic review found not only that there was little evidence supporting the effects of pay for performance on quality, but also that such programs sometimes even discouraged doctors from treating the sickest patients. If doctors cherry-pick healthier patients, they can make their outcomes look better and get more incentive payments.
Nevertheless, pay for performance is a big part of Medicaid, Medicare, and many private health plans. Medicare expects to pay $177 billion for similar incentive programs this year, and the Trump administration is likely to favor pay for performance, despite the evidence that it doesn’t work.
How can we do better?
The scientific process is an ongoing dialogue. We progress only by testing, retesting, and building on previous work. That is bound to lead to contradictory results, which means we must be extra cautious when translating the results of research into patient recommendations or public policy.
The impression that scientists are constantly changing their minds is abetted not just by sensational presentations in the media, but also by medical journals’ pursuit of headlines in prestigious news outlets.
All of us — scientists, journalists, policymakers, and clinicians — need to be more thoughtful about research. Medical journals need to insist on established research design standards and turn away the weakest studies before they raise false hopes, waste resources, or harm patients. Journals should reject studies based simply on correlations or on before-and-after comparisons without a control group.
Scientists and readers of research, including journalists, need to better understand how to interpret research design. (Medical schools can help on this front by teaching reliable and trustworthy research design in their curricula, though currently this is very rare.) Government agencies should conduct rigorous pilot tests of programs before large-scale implementation — the opposite of what happened with ACOs.
It is essential that policy be based on rigorous science. Weak research may produce enticing media headlines and alluring policy proposals. But it ultimately leads to wasted money, unhappy doctors, and policies that may damage — rather than improve — the health of Americans and US health care.
Stephen Soumerai, ScD, is professor of population medicine and teaches research methods at Harvard Medical School and the Harvard Pilgrim Health Care Institute. Ross Koppel, PhD, teaches research methods and statistics in the sociology department at the University of Pennsylvania and is a senior fellow at the Leonard Davis Institute of Health Economics.