
With last week’s retractions of two Covid-19 papers from a pair of the world’s top medical journals, the scientific community is once again wrestling with the question that arises any time a high-profile publication blows up: Could this have been prevented?
Entire forests have been felled so scholars can write papers on "the flawed process" of peer review, in which journal editors ask outside experts (usually three) to read a manuscript for rigor, methodological soundness, consistency, and overall quality. Peer review is rife with gender bias. Reviewers try to block competitors' papers. They steal ideas. They favor authors from prestigious institutions. The process is hardly better than chance at keeping bad studies from being published. It does little to improve papers.
But unlike famous past episodes of research that turned out to be based on fraudulent data — such as two bombshell studies on embryonic stem cells in 2004 and 2005, a practice-changing paper on preventing bone fractures, or hundreds of studies chronicled by Retraction Watch — the latest round of discussion comes at a time when journals are experimenting with different forms of peer review in a bid to improve the process and stop flawed papers from slipping through to publication.
When researchers gather their own data, good peer reviewers can evaluate that process. But when data are collected in an independent real-world evidence (RWE) database, how can reviewers get the visibility they need into data collection? What assessment guidelines, policies, and watchdog groups are in place to help researchers vet the quality of RWE databases?
The publication of these two papers has less to do with flawed peer review than with process. As a former managing editor at a smaller journal, I would have checked out the Surgisphere website on a first pass for manuscript suitability. Once I saw no listing of personnel, no contact info, and a blog that linked only to investing information from an outside company, that would have been a red flag to pass on to the editor and to the peer reviewers.
You misrepresent the conclusions of several of the studies you cite.
You say, “The process is hardly better than chance at keeping bad studies from being published”, citing Rothwell and Martyn’s 2000 study, “Reproducibility of peer review in clinical neuroscience.”
That study does NOT say that peer review was “hardly better than chance” at keeping bad studies from being published. The study did not even ask that question.
Instead, it looked at whether multiple reviewers' assessments agreed with each other. What the authors actually found, unsurprisingly in my view, was that expert assessments often disagreed with one another. In my decades of experience, it is common for one reviewer (who, for example, has checked the work carefully) to disagree with another (who has not bothered to look closely). Editors will usually follow the advice of the reviewer who has taken the closer look, thus often weeding out weak or suspect papers. The process isn't perfect, but it does work. Just look at the quality of papers in journals with weak or nonexistent peer review and you'll see what I mean.
You also say that the peer review process “does little to improve papers”, citing Jefferson et al. (2002), “Effects of editorial peer review: a systematic review”. But again, the paper doesn’t say that. It says almost the opposite.
What the paper says, exactly, is “Two studies assessed the effects of peer review on study report quality. Both studies showed a beneficial effect, but results may again have limited generalizability because of atypical settings (both journals studied are well resourced and keen on improving quality).”
Peer review and its potential alternatives deserve scrutiny, but this process is not helped by misrepresenting what existing studies actually say.
Your statements about peer review would not, themselves, pass rigorous peer review.
Thanks for the nice paper.
It seems to me that you tend to reduce "what scientists can do" to "what journals can do,"
setting aside what has been the driving force in these two retraction cases, and more generally in all post-publication peer review.
So I would say that readers as a collective, sometimes very well organized (the 201 colleagues who wrote to The Lancet), have done a lot and still do.
In the COVID-19 crisis, journals have been marginalized, for better or worse. See https://polecopub.hypotheses.org/2017
Best,
“Experts” have hardly covered themselves with glory in this episode. The recent burst of idiocy that essentially says that virus mitigation rules don’t matter if African Americans wish to protest but do if they want to attend church or hold a funeral will produce a lot of skepticism, to say the least.
Politics. Namely, TDS.