
The results of bioscience research are among the great achievements of the modern world. From them have come deep insights into the fundamental nature of life and effective means to prevent and treat disease, with many more on the way. But flaws in peer review, a critical step in communicating science, hamper its progress.
Communicating the results of basic science, clinical trials, observational studies, case reports, and the like is essential to the forward movement of science. The peer review system is at the core of this process. It works like this: a journal editor asks outside experts to critique a manuscript and assess its suitability for publication, with or without modifications. Editors use these peer reviews to help determine whether or not to publish a report.
The peer review process has existed for more than 150 years. It became routine in its current form after World War II, as funding for bioscience research exploded along with specialization and professionalization in the bioscience community.
In earlier times, editors evaluated manuscripts submitted for publication and rendered decisions themselves, asking for informal consultations as they saw fit. Today, the scope and diversity of science demand input from expert reviewers to enhance the quality and reliability of published reports.
Two common features of the current peer review system subvert the goals of science, and should be changed: The product of peer reviews generally can’t be seen by the scientific community, and reviewers are almost always anonymous. Although the vast majority of scientists living today have known only this approach, there is no compelling reason to continue using it and many reasons to revise it.
Today, the only people who are privy to the results of a peer review are the authors and the journal’s editors. I strongly believe such reviews should be part of the public record of science, just as the final published reports are. Why? There is a huge variation in the quality of reviews, from deep and insightful to shallow and misinformed. When peer review is cloaked in secrecy, there are limited incentives for performing high-quality reviews. That allows bias, carelessness, conflict of interest, and other deficiencies to persist without a way to penalize those who generate inadequate reviews.
Reading a paper in “final form” without access to the back-and-forth of reviews and editorial commentary deprives readers of key insights into how the research was conducted. Instead, we see a tidy but inaccurate portrayal that misrepresents the messy path that research often takes in going from the lab or clinic to publication. As a consequence, our ability to properly educate scientific trainees suffers, and our capacity to provide an accurate history of discovery is impaired.
Finally, public access to peer reviews would make it possible to examine the factors that influence the quality of peer review and editorial decision-making. With hidden reviews, such research is impossible, limiting needed improvements to the process.
Are there advantages to hidden reviews that counterbalance these objections? In an earlier era of print-only medical journals, space was limited by cost. Publishing peer reviews would have been expensive. Today, with online publishing essentially universal and the cost of posting a peer reviewer’s comments negligible, that objection disappears. Apart from this, I can think of no other advantage to keeping peer reviews private — yet tradition and inertia sustain this practice.
What about revealing reviewers’ identities? Though this proposal is more controversial, there are powerful reasons to support it as well. Just as reviewer anonymity provides incentives for shoddy or inappropriate reviews, reviewer identification would discourage such work.
Providing a solid peer review takes time and energy, and reviewers receive no financial compensation for their work. Identifying reviewers would at least make it possible to recognize the quality and impact of reviews when faculty members are evaluated for promotion. Consider a reviewer whose critiques and suggestions enabled authors to transform their submission from an inferior report to an important contribution. Shouldn’t the reviewer be credited with that real and documentable contribution? That’s not possible today.
And just as with making reviews public, identifying reviewers would enhance research into the factors contributing to the quality of reviews.
Over the past 30 years, peer review reformers have argued for these and other changes. One controlled experiment involving the BMJ showed that when properly alerted, reviewers accepted the loss of anonymity and produced reviews of undiminished quality. Quite a few other journals are testing the waters, with promising results. But the vast majority of journals continue to employ hidden and anonymous peer review and show no inclination to give it up.
Despite the evident advantages of being more transparent about peer review, some editors believe that their editorial oversight compensates for deficiencies in the quality of peer reviews. That may be true for some journals, but it is far from universal and it’s impossible to independently evaluate.
Instituting the two changes I have described might entail added work and cost that journal editors would rather avoid. Some scientists might also resist these changes. A prominent reason for their resistance is hardly complimentary to the scientific community: Some reviewers, especially junior scientists, fear that critical but honest reviews of work by senior scientists in the field would subject them to retribution by these senior figures, who might someday be asked to review their grants, papers, or promotions. Faced with such fears, they might decline requests to do peer reviews, or provide less honest reviews seeking to pander to the authors.
This isn’t just a hypothetical concern — I have heard it from many faculty members at Harvard Medical School, as well as from some journal editors. To the extent it is real, this brings shame on a profession committed at its core to the pursuit of knowledge and truth. Rather than accepting this outcome as unavoidable and countering it with secrecy and its adverse consequences, we must address the problem through a variety of concerted approaches.
Leaders in biomedicine should take seriously the need to improve the processes we use to evaluate and publish science, which are now hobbled by lack of transparency. Concerns about the irreproducibility of bioscience research suggest that many other aspects of the research ecosystem need improvement, from how we educate scientists in research methodology and ethics to how we apportion research funding and the criteria we employ for appointment, promotion, and recognition.
But our approach to journal peer review is especially ripe for change. I urge the science publishing industry, the academic community whose research sustains it, and the organizations that fund it to cooperate in considering and responding constructively to these issues.
Jeffrey S. Flier, MD, is the Harvard University Distinguished Service Professor, the Higginson Professor of Physiology and Medicine, and the former dean of Harvard Medical School.
Secure pseudonyms available with Internet technologies provide a clean solution to the problems of protection and accountability:
Extended abstract (5 min. read):
Stodolsky, D. S. (2002). Computer-network based democracy: Scientific communication as a basis for governance. Proceedings of the 3rd International Workshop on Knowledge Management in e-Government, 7, 127-137.
https://sites.google.com/a/secureid.net/dss/curriculum-vita/computer-network-based-democracy
Comprehensive version:
Stodolsky, D. S. (1995). Consensus Journals: Invitational journals based upon peer review. The Information Society, 11(4). [1994 version in N. P. Gleditsch, P. H. Enckell, & J. Burchardt (Eds.), Det videnskabelige tidsskrift (The scientific journal) (pp. 151-160). Copenhagen: Nordic Council of Ministers. (Tema NORD 1994: 574)]
https://sites.google.com/a/secureid.net/dss/consensus-journals
I’m slightly biased, but ScienceOpen already employs most of the solutions to peer review outlined above: http://about.scienceopen.com/peer-review-guidelines/
There are a number of realistic steps that can be taken to improve peer review. At EMBO Press we post referee comments, editorial decision letters, and author responses in full; referees are invited to sign their reports, but we leave the decision up to them. The reports are very useful even without revealing referee identity: they add transparency and accountability, they provide a rich set of expert views on every paper, and they are a great teaching tool for scientists new to peer review. We hope that in the future funders and institutions will be able to take this information into account when they evaluate a referee’s output; that will be the best incentive to ensure improved peer review.
We also get all the referees to comment on each other’s reports and often consult with the authors before making an editorial decision. This is a powerful mechanism to balance referee opinion.
For more information, see http://emboj.embopress.org/content/33/1/1
I agree with Dr. Flier’s recommendations. All too often in medical journals, we see papers that use sloppy statistical methods. It appears that many or all of the reviewers base their review on whether they agree with the results. If the reviewers agree with the results, they ignore any problems with statistical methods. Conversely, when reviewers disagree with the results, they often point out methodological problems that do not exist. Dr. Flier’s suggestion will hold reviewers accountable, and this will improve medical research.
Patients should be able to review the papers too. We are not lab rats. We can add valuable context to the research. A quick review of my medical records shows that people of science often misinterpret what patients are saying, and when we call doctors out on it, we are dismissed. People with degrees being dismissive of patients often results in misdiagnosis and bad research.
VM, at The BMJ we involve patients in the peer review of research and education articles. We also ask authors of research articles to say whether and how they have involved patients in their research. You can learn more about the Patient Partnership project here: http://www.bmj.com/about-bmj/resources-reviewers/guidance-patient-reviewers
Some medical journals already do this. At The BMJ we’ve had open peer review for many years. Since 2014 we’ve also published prepublication histories for most accepted research papers. This includes signed reviews, all versions of the article, the study protocol if the paper reports a clinical trial, the report from The BMJ’s manuscript committee meeting where decisions are made about what to publish, as well as the authors’ responses to comments from editors and peer reviewers. In the editorial announcing this policy (http://www.bmj.com/content/349/bmj.g5394.long) we wrote that
“Such open peer review should increase the accountability of reviewers and editors, at least to some extent. Importantly, it will also give due credit and prominence to the vital work of peer reviewers. At present, peer review activities are under-recognised in the academic community.” With support from influential people like Dean Flier, perhaps more journals will follow suit.
“Faced with such fears, they might decline requests to do peer reviews, or provide less honest reviews seeking to pander to the authors.”
I think this is a far larger problem than Jeffrey admits. Open peer review would simply benefit senior and well-connected researchers, as well as researchers at prestigious institutions (e.g., HMS).
I personally think double-blind peer review would be ideal, but it is hard to execute in practice: it is very easy to identify authors from the text and references.
Anonymous but public peer review has merits. The fact that a review will be made public, with the reviewer’s identity known only to the journal, would certainly help motivate reviewers to be more thorough in their work. In this case, an editor could search the journal archives for prior reviews before selecting a given reviewer. Also, the journal could (and should) editorially reject manuscripts submitted by reviewers who have written inappropriate or abusive reviews in the past. As an additional benefit, people could be required to include a reference to the articles for which they claim peer review service on CVs or internal promotion documents.
How do you provide a reference for a paper that hasn’t been published after you’ve reviewed it?
Peer review is outdated and broken in its current form; it’s time for a radical overhaul.
Preprint servers where authors can post their papers prior to publication offer an easy fix to the first problem. Authors could post reviews and editorial decision letters together with their preprint. This would allow for public access to the review and editorial process.
By that same logic, departmental deliberation in the promotion and tenure process should also be public, as well as the machinations of evaluative panels at the campus level. Our letters of reference for students should be public. Our ballots in the voting booth, too.
Dr. Flier acknowledges that a reviewer’s fear of retribution “isn’t just a hypothetical concern”. He suggests that the “problem must be addressed through a variety of concerted approaches”. This is a great strategy, which will also end global poverty as soon as someone comes up with the specifics. I have been both a named and an anonymous reviewer, and I can attest that signing your review of a paper written by an authority in your own field of research dissuades you from being very critical. It is, in addition, a strong incentive to give the authors a pass, as it will surely be remembered as a favor. Dr. Flier also makes the argument that the loss of anonymity in a study “produced reviews of undiminished quality”. However, the metrics of that study (J Clin Epidemiol 1999;52:625 | PMID: 10391655) were limited to describing whether predefined elements were included in the peer review. The study did not show that loss of anonymity did not produce bias, just as a trial is not necessarily fair because it includes cross-examination. Loss of reviewer anonymity will inevitably come with strong incentives to modify opinion based on the identity of the authors. If you have pondered the argument of open-carry proponents that “an armed society is a polite society”, you will understand why peer review needs to remain anonymous.