
The results of bioscience research are among the great achievements of the modern world. From them have come deep insights into the fundamental nature of life and effective means to prevent and treat disease, with many more on the way. But flaws in peer review, a critical step in communicating science, hamper its progress.
Communicating the results of basic science, clinical trials, observational studies, case reports, and the like is essential to the forward movement of science. The peer review system is at the core of this process. It works like this: a journal editor asks outside experts to critique a manuscript and assess its suitability for publication, with or without modifications. Editors use these peer reviews to help determine whether or not to publish a report.
The peer review process has existed for more than 150 years. It became routine in its current form after World War II, as funding for bioscience research exploded along with specialization and professionalization in the bioscience community.
In earlier times, editors evaluated manuscripts submitted for publication and rendered decisions themselves, asking for informal consultations as they saw fit. Today, the scope and diversity of science demand input from expert reviewers to enhance the quality and reliability of published reports.
Two common features of the current peer review system subvert the goals of science, and should be changed: The product of peer reviews generally can’t be seen by the scientific community, and reviewers are almost always anonymous. Although the vast majority of scientists living today have known only this approach, there is no compelling reason to continue using it and many reasons to revise it.
Today, the only people who are privy to the results of a peer review are the authors and the journal’s editors. I strongly believe such reviews should be part of the public record of science, just as the final published reports are. Why? There is a huge variation in the quality of reviews, from deep and insightful to shallow and misinformed. When peer review is cloaked in secrecy, there are limited incentives for performing high-quality reviews. That allows bias, carelessness, conflict of interest, and other deficiencies to persist without a way to penalize those who generate inadequate reviews.
Reading a paper in “final form” without access to the back-and-forth of reviews and editorial commentary deprives readers of key insights into how the research was conducted. Instead, we see a tidy but inaccurate portrayal that misrepresents the messy path that research often takes in going from the lab or clinic to publication. As a consequence, our ability to properly educate scientific trainees suffers, and our capacity to provide an accurate history of discovery is impaired.
Finally, public access to peer reviews would make it possible to examine the factors that influence the quality of peer review and editorial decision-making. With hidden reviews, such research is impossible, limiting needed improvements to the process.
Are there advantages to hidden reviews that counterbalance these objections? In an earlier era of print-only medical journals, space was limited by cost. Publishing peer reviews would have been expensive. Today, with online publishing essentially universal and the cost of posting a peer reviewer’s comments negligible, that objection disappears. Apart from this, I can think of no other advantage to keeping peer reviews private — yet tradition and inertia sustain this practice.
What about revealing reviewers’ identities? Though this proposal is more controversial, there are powerful reasons to support it as well. Just as reviewer anonymity removes a deterrent to shoddy or inappropriate reviews, reviewer identification discourages such work.
Providing a solid peer review takes time and energy, and reviewers receive no financial compensation for their work. Identifying reviewers would at least make it possible to recognize the quality and impact of reviews when faculty members are evaluated for promotion. Consider a reviewer whose critiques and suggestions enabled authors to transform their submission from an inferior report to an important contribution. Shouldn’t the reviewer be credited with that real and documentable contribution? That’s not possible today.
And just as with making reviews public, revealing reviewers’ identities would enhance research into the factors that contribute to the quality of reviews.
Over the past 30 years, peer review reformers have argued for these and other changes. One controlled experiment involving the BMJ showed that when properly alerted, reviewers accepted the loss of anonymity and produced reviews of undiminished quality. Quite a few other journals are testing the waters, with promising results. But the vast majority of journals continue to employ hidden and anonymous peer review and show no inclination to give it up.
Despite the evident advantages of being more transparent about peer review, some editors believe that their editorial oversight compensates for deficiencies in the quality of peer reviews. That may be true for some journals, but it is far from universal and it’s impossible to independently evaluate.
Instituting the two changes I have described might entail added work and cost that journal editors would rather avoid. Some scientists might also resist these changes. A prominent reason for their resistance is hardly complimentary to the scientific community: Some reviewers, especially junior scientists, fear that critical but honest reviews of work by senior scientists in the field would subject them to retribution by these senior figures, who might someday be asked to review their grants, papers, or promotions. Faced with such fears, they might decline requests to do peer reviews, or provide less honest reviews seeking to pander to the authors.
This isn’t just a hypothetical concern — I have heard it from many faculty members at Harvard Medical School, as well as from some journal editors. To the extent it is real, it brings shame on a profession committed at its core to the pursuit of knowledge and truth. Rather than accepting this outcome as unavoidable and countering with secrecy and its adverse consequences, we must address this problem through a variety of concerted approaches.
Leaders in biomedicine should take seriously the need to improve the processes we use to evaluate and publish science, which are now hobbled by lack of transparency. Concerns about the irreproducibility of bioscience research suggest that many other aspects of the research ecosystem need improvement, from how we educate scientists in research methodology and ethics to how we apportion research funding and the criteria we employ for appointment, promotion, and recognition.
But our approach to journal peer review is especially ripe for change. I urge the science publishing industry, the academic community whose research sustains it, and the organizations that fund it to cooperate in considering and responding constructively to these issues.
Jeffrey S. Flier, MD is the Harvard University Distinguished Service Professor, the Higginson Professor of Physiology and Medicine, and the former dean of Harvard Medical School.
I am surprised that f1000research.com receives no explicit mention in the post, and has not been cited in the comments to date. My understanding is that F1000 has been a leader in open peer review. Their particular emphasis on immediate publication followed by transparent post-publication peer review also warrants special mention, since this addresses interminable publication delays which are another serious problem in the academic publication enterprise.
The study quoted had 55% of reviewers refuse to participate in an open review, although they didn’t ask if they would have participated if it were closed. At best, you can conclude as many as half of reviewers would refuse to participate in open reviews. Anyone without tenure would have to be dumber than ten dogs in a sack to agree to it, since the whole point of reviewers being anonymous is to allow them to give honest feedback without fear of retribution.
And that study also found that open reviewing didn’t increase the quality of reviews. Yet the author continues to argue that it should, despite the evidence to the contrary.
Did the study test whether the reviewers would participate with a proviso that the review would be “open(ed)” 1) only upon the MS being published, and 2) then perhaps only if they chose not to “opt out” of having their names credited?
Did not PNAS once cite, at the end of a publication, the name of the NAS member who served as the sponsor of the publication? Anonymity (by which I mean “confidentiality”) only goes so far: In my days “at the bench” it was not difficult to intuit who had reviewed your paper just from the nature of the comments (because you had heard them openly at meetings).
Some may find this literature of interest to supplement Dr. Flier’s opinions.
Teixeira da Silva, J.A., Dobránszki, J. (2015) Problems with traditional science publishing and finding a wider niche for post-publication peer review. Accountability in Research: Policies and Quality Assurance 22(1): 22-40.
http://www.tandfonline.com/doi/full/10.1080/08989621.2014.899909
DOI: 10.1080/08989621.2014.899909
Other literature of pertinence at:
https://www.researchgate.net/profile/Jaime_Teixeira_Da_Silva
Within Dr. Flier’s provocative suggestions is the refreshing idea that small adjustments in the conduct and the reviewing of research in the digital age may be more productive for research integrity than the abundant exhortations to study and improve behavior. Falsified images underlie a lot of questioned research. A common rhetorical question around the table during discussion of an image case at ORI was “where were the reviewers of this turkey?” Indeed, the number of retractions would fall dramatically if only the peer reviewers (not to mention coauthors!) had been motivated to look at the material as carefully as the anonymous commenters on PubPeer.

Dr. Flier is right that journals could adopt steps that would get reviewers to take their job more seriously. One step toward that goal might be to publish the names of the reviewers at the end of the publication, or, more innocuously, at least to show the links between reviewers and papers at the end of the completed volume series. That would also enable interested readers to judge whether irrelevant papers from Dr. X’s lab were being gratuitously cited because he or she was a reviewer. Drummond Rennie once rebutted (at a joint ORI-editors conference) that this idea would discourage reviewer recruitment. However, the anonymity of the reviewers of a rejected manuscript need not be broken, and reviewers with critical comments about an accepted paper could have their identities withheld by “opt-out.” Perhaps there is a work-around for Dr. Flier’s valid concern about junior scientists.