We are awash in articles reporting new study results. Whether it’s a potential cure for an illness, daunting statistics about health care outcomes, or the latest take on what makes a healthy diet, we’re receiving more information than ever before, nudging us toward decisions about our health.
But are we getting accurate information?
Dr. John Ioannidis, a statistician and professor of medicine at Stanford, recently joined three dozen health researchers in a Stanford lecture hall to explain why most research findings are false. Of the most widely cited health research studies of the last decade, he found that only a minority could be replicated, and at least 1 in 6 had actually been contradicted by later studies.
Few of the cases Ioannidis cited involved outright fraud. Instead, most research claiming an effect in specific groups of people had used poor statistical methods that failed to support its conclusions. More often than not, researchers had simply sliced and diced their data until the results seemed significant rather than null. Still, Ioannidis pointed out that most claims in health research today are simply wrong.
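To see why slicing and dicing produces so many spurious findings, consider a back-of-the-envelope sketch (the subgroup counts below are hypothetical, chosen only for illustration): if an intervention truly does nothing and a researcher tests it in many independent subgroups at the conventional 5% significance level, the chance of at least one false "significant" result grows quickly.

```python
# Probability of at least one false-positive "significant" result when
# testing k independent subgroups at alpha = 0.05, assuming every true
# effect is null. The subgroup counts are illustrative, not from any study.
def false_positive_risk(k: int, alpha: float = 0.05) -> float:
    """Chance that at least one of k independent null tests crosses alpha."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:2d} subgroup tests -> "
          f"{false_positive_risk(k):.0%} risk of a spurious 'significant' finding")
```

With a single pre-specified test the risk stays at 5%, but across 20 subgroup analyses it climbs to roughly 64%, which is why unplanned subgroup hunting so reliably yields publishable-looking noise.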
This pattern is driving the “replication crisis”: an alarming accumulation of studies whose results can’t be duplicated by other parties. The crisis isn’t limited to health research, but that’s where it potentially poses the biggest problem.
Replicability is one of the key tenets of scientific research, and for good reason. If a study shows that a new drug, for example, treats a specific condition in a specific population, the same result should emerge when a different team conducts the same test. Replicability validates results, assuring that the outcome is real and not the result of some unforeseen variable or the play of chance. It ensures that the information we think we’re getting is, in fact, accurate.
The ultimate issue of the replication crisis is simple: We can’t make informed decisions without the correct information, and we’re getting a lot of incorrect information.
The replication crisis is happening because of misaligned incentives: Rather than simply sharing the results of research with the world, many players have some motive pushing them toward a specific result. Two pharmaceutical replication studies looking at in-house and external validation revealed that only a minority of therapeutic claims about cancer, cardiovascular disease, and other major preclinical drug targets could be reproduced at independent laboratories.
A democratic approach to data is emerging, built on the view that data should be carefully but widely shared among researchers. This has parallels in other fields, such as open-source code on GitHub, Code Ocean, and other online repositories. Although data sharing raises a privacy issue for people who fear disclosing personal health information to an anonymous virtual horde, data hoarding leads to a proliferation of unverifiable health claims by preventing others from seeing the data, replicating the analysis, and verifying a claim.
From my perspective, there are three actions that researchers and general consumers of health information need to demand of companies and health care researchers to effect change and stop the replication crisis:
Democratize data. While individual health data are and should be private, datasets can be de-identified and shared in ways that honor informed consent and protect individual privacy. The demand from participants in clinical research studies to make their data available in this way has generated surprising revelations about the results of major drug trials and increased the capacity to make better decisions about health. Data sharing, code sharing, and replication repositories are typically free to use.
Embrace the null. Null results are much more likely to be true — and are more common — than “significant” results. The excessive focus on publishing positive findings is at odds with the reality of health: that most things we do to improve our health probably don’t work and that it’s useful to know when they don’t. Researchers should focus on how confident they are about their results rather than on whether their results should simply be labeled “significant” or not.
Be patient. The 19th-century physician William Osler once said that, “The young physician starts life with 20 drugs for each disease, and the old physician ends life with one drug for 20 diseases.” New revelations take time to replicate, and new interventions — particularly new drugs — have safety issues that may become apparent only years after they come on the market. Older therapies may be less effective but may also be more reliably understood. If we demand that new therapies stand the test of time, we offer ourselves the opportunity to be safer as we balance innovation with healthy skepticism.
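One concrete way to embrace the null is to report an effect estimate with a confidence interval rather than a binary significant/not-significant label. A minimal sketch in Python, using made-up outcome numbers purely for illustration and a normal approximation for the interval:

```python
import statistics

# Hypothetical change in some health metric per participant in a two-arm
# study. These numbers are invented for illustration, not from any trial.
treatment = [1.2, 0.8, 1.5, 0.3, 0.9, 1.1, 0.7, 1.4]
control   = [0.9, 0.6, 1.1, 0.4, 0.8, 1.0, 0.5, 1.2]

def mean_diff_ci(a, b, z=1.96):
    """Difference in means with a normal-approximation 95% confidence interval."""
    diff = statistics.mean(a) - statistics.mean(b)
    # Standard error of the difference between two independent sample means.
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return diff, (diff - z * se, diff + z * se)

diff, (low, high) = mean_diff_ci(treatment, control)
print(f"estimated effect {diff:+.2f}, 95% CI ({low:+.2f}, {high:+.2f})")
```

With these invented numbers the interval spans zero, which communicates the uncertainty directly: the data are compatible with no effect at all, information that a bare “not significant” label would flatten.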
That’s why I and my team at Collective Health take a rigorous approach to our research process to be sure we’re making informed decisions to improve the health of our members. We’ve established a pre-specified study design that can be applied to most of the research we do (on digital apps, health care utilization, and cost). This design helps us determine our outcome metrics, statistical methods, and subgroup analyses ahead of time, leading us away from “slicing and dicing” the data to show a positive result when a first result misses the threshold for statistical significance.
We are also sharing our statistical research code. In an industry dominated by intellectual property concerns, we’ve taken the stance that the best new health claims should be replicable. Most health claims are based on standard data analyses (or, increasingly, standard machine-learning analyses) applied to a new dataset, or on a new question applied to an existing dataset. By sharing our research code, we aim to achieve a level of transparency that elevates our standing in the industry without undermining our ability to profit from discovery.
As research proliferates at a breakneck pace, critical examination of it isn’t keeping up, fueling the replication crisis. All of us, consumers and researchers alike, must demand replicable research as a standard rather than an exception.
Sanjay Basu, M.D., is a primary care physician and epidemiologist and director of research and analytics at Collective Health and a faculty member of Harvard Medical School and Imperial College London.
Hi Dr. Basu,
Thank you for this timely and extremely important reminder to researchers to report what the results show. The null hypothesis exists for a reason.
I’ve published one peer-reviewed article upon which my MS was based. It was original research and was led by my mentor, a world-renowned researcher. He was a purist about research and the scientific method. If your research doesn’t show what you think it will, that’s what you report. Yes, more funding comes from the research that is done. But he never had a problem with funding, even when his results were at odds with the highly respected companies that made products for our industry. He reported what he found. And many times the company chemists would admit that they knew of the shortcomings of the product. To me, that was and still is the bottom line in research.
Keep up the great work!!
Johnny Johnson, Jr., DMD, MS
President, American Fluoridation Society