Peer review is everybody’s favorite punching bag in science these days, and for good reason: As we and others have written, it’s secretive, susceptible to bias, and often appears to fail at keeping scientific publishing rigorous and honest.
But peer review is essential for the smooth operation of the scientific publishing apparatus. Without the imprimatur, however imperfect, of independent scholars, research papers would all in effect be titled “Trust us …”
The problem is, we have scant research into how well peer review does its job of keeping bad science out of the literature. Journals don’t devote sufficient attention to studying the quality of their own peer review systems, nor do they make the relevant data available to outside scholars.
That could be changing.
A pair of scholars is calling for a modest moonshot to improve the system, which they call (rightly, we think) a “black box.” Writing in this week’s issue of Science, Carole Lee, a philosopher at the University of Washington, and David Moher, of the Ottawa Hospital in Ontario, Canada, argue that publishers should become much more transparent about their peer review practices.
“Though the vast majority of journals endorse peer review as an approach to ensure trust in the literature, few make their peer review data available to evaluate effectiveness toward achieving concrete measures of quality,” they write. Measures such as consistency in reviews, for example, would be helpful — did most papers get approved with mixed reviews or with flying colors?
“There is too little sound research on journal peer review; this creates a paradox whereby science journals do not apply the rigorous standards they employ in the evaluation of manuscripts to their own peer review practices.”
Lee and Moher propose that publishers spend 1 percent of their budgets on research into the effectiveness of their peer review systems — a number based on what the Human Genome Project spent to investigate the ethical, legal, and social implications of its efforts.
The obvious question here, of course, is: Why would publishers spend money that, at least so far, they haven’t felt the need to shell out? As the saying goes, why buy the cow when the milk’s free?
But Lee and Moher offer a few reasons that publishers ought to find compelling. The first involves fighting off incursions from predatory outfits that promise quality peer review on par with legitimate journals but rarely deliver. Being able to point to data showing superior reviewing would be a boon for non-predatory outfits. One attempt, PRE — or Peer Review Evaluation — has been around for a few years now. (Disclosure: One of us, I.O., was an advisor to PRE before it was acquired by the American Association for the Advancement of Science.)
Similarly, journals are gradually starting to look beyond impact factor as the most important signal of quality. Strong peer review could join emerging metrics like reproducibility and the willingness to share data as indicators that one journal is more reliable than another. We’ve even suggested a Transparency Index.
Until journals and publishers start taking a closer look at their own peer review processes, Lee and Moher write, “inadequately reported research will continue to waste time and resources invested by authors, reviewers, journals, academic institutions, funders, study participants, and readers — and limit the credibility and integrity of science.”
Fortunately, there have been some attempts to pry open the black box of peer review. In a baby step, a group of editors at the British Journal of Surgery created an online forum that allowed manuscripts to be peer-reviewed in the open. In a paper last month describing the experiment, published in PLOS ONE, the editors say the results were mixed. “Open online peer review is feasible in this setting,” they concluded, “but it attracts few reviews, of lower quality than conventional peer reviews.” (However, the comparison may not have been entirely fair, as Richard Smith, former editor of the British Medical Journal, wrote in response.)
Still, the timing is right to keep this discussion alive and well: Later this summer, the world’s small band of scholars who study peer review will gather in Chicago for the Peer Review Congress, which is held every four years.
Scientists will present their studies on what is and isn’t working in peer review, along with novel ways to fix it. But just think how much more they’d have to work with if journals pried open their black boxes and let some data out.