Science headlines can be notoriously flip-floppy: One week something causes cancer, another week it protects against it. A cholesterol drug works. Oops, no it doesn’t. Well, maybe it does, a bit.
To help prevent whiplash, researchers developed the meta-analysis — a means of combining the data of previous studies into a larger pool to get a better sense of what’s happening. Meta-analyses — and their close cousins, systematic reviews, which take the same approach but stop short of statistical analysis — are considered the pinnacle of scientific evidence, the dispassionate and orderly adults in a roomful of clamoring children.
But now one of the leading critics of the quality of biomedical research — who himself has published a number of meta-analyses — says he believes such studies have a problem of “epidemic proportions.”
In a new paper, John Ioannidis, of Stanford University, argues that scientists are being deluged with a “massive production of unnecessary, misleading, and conflicted systematic reviews and meta-analyses.” Rather than present objective evidence, these articles are afflicted with the very illness — assumptions, biases, and wishful thinking — that they ostensibly try to filter out, he says.
The surge in reviews — a 2,500 percent increase since 1991, according to Ioannidis’s PubMed analysis — stems precisely from their strength: people pay attention to their conclusions. As a result, “Many scientists now want to do them, leading journals want to publish them, and sponsors and other conflicted stakeholders want to exploit them to promote their products, beliefs, and agendas,” Ioannidis writes in a Q&A about the paper.
(We note, hanging our heads in a moment of self-reflection, that journalists are responsible for some of this. In an attempt to avoid potentially misleading readers, we often value meta-analyses and reviews above individual study results.)
To be sure, scrupulous reviews are valuable, Ioannidis says. Indeed, he envisions a field of science in which meta-analyses and systematic reviews are the “main type of primary research.”
“The problem is that most of them are not carefully done and/or are done with predetermined agendas on what to find and report,” he writes, calling it a major missed opportunity for scientists to lay down a bedrock of truly solid results in the literature.
To fix that, says Ioannidis, we should flip our thinking. “Any new study should start from a systematic review of what we already know (to even justify the need for its conduct and its design) and should end with an updating of what we know after this new study, again in the systematic review framework,” he writes. That could give rise to living documents that change as new findings augment or reverse old conclusions.
Such “living documents” — suggested in an editorial that accompanies Ioannidis’s paper — would be one solution. Another could be rethinking incentives, so that a high-quality meta-analysis or systematic review would be worth more at promotion time than a bunch of weak ones. “A scientific culture that values methodological rigor, research transparency, and data sharing over rampant productivity will hopefully yield systematic reviews and meta-analyses that are necessary and nonduplicative and that do not compromise on quality,” write Matthew Page and David Moher in the editorial.
Other scientists are raising similar concerns about meta-analyses. The scientist-blogger Hilda Bastian, for example, has pointed out that, among other flaws, meta-analyses are prone to publication bias, because negative findings rarely find their way into print. They’re also a “snapshot in time” that can be — and often are — obsolete by the time they’re published.
The solution to this problem is not to dismiss meta-analyses. Rather, as Bastian says, it’s to make more data available by publishing all studies, positive or negative.