
Peer review is a thankless task. Almost entirely uncompensated and usually anonymous, it’s considered a professional obligation, a good for science in general. So scientists go along, accepting assignments from editors and effectively killing any chance of binge-watching their favorite show on Netflix after the kids have gone to bed.

The system is also a way for journals and publishers to use the crowd to do much of the work of editors. In other words, all of the benefits accrue in a single direction, unless you count the warm feeling reviewers might get from doing something they think they should be doing.


Publishers argue that what goes around comes around: You take the time to conscientiously review your peers because one day your peers will do the same for you; it’s the Golden Rule. Problem is, there’s gold in that system, but it’s not the karmic kind. Publishers have used the free model as a way to bloat their rosters of titles and the number of papers they publish, diluting the quality of academic papers while they grow their bottom lines.

Some have tried ways to make the ledger a bit more even. For many years, for instance, the Journal of the American Medical Association and the New England Journal of Medicine have printed lists of reviewers as annual “thank yous.” In a trial, Nature recently began publishing reviewers’ names on published papers, as long as authors and reviewers gave permission. A company called Publons allows researchers to collect badges and public recognition for their work reviewing, and has received — wait for it — good reviews.

And Elsevier, which publishes some 3,200 journals, is now piloting a way for reviewers to get props from their peers and editors. The plan, according to a note sent to reviewers, is to “publish on the journal’s website a list of reviewers with their full names and their relative ranking and percentile in how quickly they submitted their report (computed as days between the invitation to review and the submission of a referee report).”


Only the reviewers who make the top 80 percent in terms of promptness will make the ranking list; those in the bottom 20 percent won’t have to see their names on a roster of shame.

Leaving aside whether the pilot — which is raising eyebrows among some scientists on social media — will be an effective incentive for scientists to serve as reviewers, it has another problem: Prioritizing speed in the review process is fine if the goal is throughput, but is it good for promoting quality science?

The answer: hardly. Rapid reviews can be shoddy, as Elsevier knows well from a case in one of its own journals last year. And given how many problems readers are identifying on sites like PubPeer once papers are published, does pushing for speed really make sense?

That leaves a final kind of incentive that some have experimented with: Money. “We need to abandon the belief that there is only one peer review market that operates entirely on volunteer labor,” Philip Davis, a publishing consultant, told The Times Higher Education earlier this year. The open-access journal Collabra is offering reviewers and editors modest payments based on the fees it charges authors. Scholars can take the cash, or, if they prefer, donate it to a fund Collabra has set up to help authors defray publication charges. Similarly, the small United Kingdom-based publisher Veruscript earlier this year announced plans to pay the reviewers for its four new journals. As with Collabra, the money will come from the article fees Veruscript charges authors.

These ideas might only be possible for small upstarts to attempt. After all, a little napkin math quickly reveals how costly paid reviewing can be — and how seductive the free version is — for big publishers. Elsevier has some 3,200 journals in its catalog. Assuming each of those titles considers, on average, 100 manuscripts per year that require three reviewers per article, that’s just shy of a million reviews annually.

If Elsevier were to pay, say, $100 per review (a laughably small sum for the effort, which researchers say takes a half day when done properly), that’s $100 million per year. (Though critics would point out that that amount is a tiny fraction of the company’s annual profits.) The company could offer payments to individual reviewers, or even pool the funds to support research projects by its authors — something we imagine researchers could get behind.
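The napkin math above can be sketched in a few lines of code, using the figures assumed in the article (these are illustrative assumptions, not Elsevier’s actual numbers):

```python
# Back-of-the-envelope cost of paying reviewers, using the
# article's assumed figures (illustrative, not actual data).
journals = 3200                  # approximate Elsevier catalog size
manuscripts_per_journal = 100    # assumed average considered per year
reviewers_per_manuscript = 3     # assumed reviewers per article
fee_per_review = 100             # hypothetical payment, in USD

reviews_per_year = journals * manuscripts_per_journal * reviewers_per_manuscript
annual_cost = reviews_per_year * fee_per_review

print(reviews_per_year)  # 960000  -- "just shy of a million" reviews
print(annual_cost)       # 96000000  -- roughly $100 million per year
```

Change any of the assumed inputs and the total scales linearly, which is why even a modest per-review fee adds up quickly at a large publisher’s volume.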

So, it’s time for the big fish to get into the pond. Then they could write up their findings and submit the manuscript to a peer-reviewed journal — and more reviewers would get paid.

  • As an alternative to Debora’s suggestion (I admit a more egoistic idea): What if after a certain number of reviews for a journal I get a voucher to publish one manuscript free of publishing charges? Of course my manuscript would have to go through peer-review like all others.

    Not one voucher per review, but let’s say if I review 5 papers (and the editor scores my reviews high enough), I get a free paper, which in turn might pay for going to a conference.

  • The idea is indeed excellent. It does not need to be one article per review I do. I review many more papers per year than I publish, so a ratio of one free article per three reviews, or even one per four, would be fine.

  • How about if every reviewer could choose, per review, one paper from behind the paywall to be free for everyone to read? Then by reviewing we could be contributing to freeing up knowledge. And we would get to choose the paper (from that particular publisher) that gets freed up. They could put the reviewer’s name on the page advertising the now-free paper: “This paper is brought to you free of charge by the reviewing efforts of Jane Doe.”

Comments are closed.