Imagine that someone offers to give you a guard dog. When the wretched creature arrives, you find out that she is calf-high, arthritic, blind, nearly deaf, and toothless. Oh, and she can’t bark, either.
Wait, you say. This dog won’t protect me against anything!
Don’t be picky, responds your benefactor smugly. This dog is the best we’ve got.
So it goes with peer review. While some — namely, journal editors and publishers — would like us to consider it the opposable thumb of scientific publishing, the key to differentiating rigor from rubbish, some of those very same people seem to think it’s good for nothing. Here is a partial list of the things that editors, publishers, and others have told the world peer review is not designed to do:
1. Detect irresponsible practices
Don’t expect peer reviewers to figure out if authors are “using public data as if it were the author’s own, submitting papers with the same content to different journals, or submitting an article that has already been published in another language without reference to the original,” said the InterAcademy Partnership, a consortium of national scientific academies.
2. Detect fraud
“Journal editors will tell you that peer review is not designed to detect fraud — clever misinformation will sail right through no matter how scrupulous the reviews,” Dan Engber wrote in Slate in 2005.
3. Pick up plagiarism
Peer review “is not designed to pick up fraud or plagiarism, so unless those are really egregious it usually doesn’t,” according to the Rett Syndrome Research Trust.
4. Spot ethics issues
“It is not the role of the reviewer to spot ethics issues in papers,” said Jaap van Harten, executive publisher of Elsevier (the world’s largest academic imprint) in a recent interview. “It is the responsibility of the author to abide by the publishing ethics rules. Let’s look at it in a different way: If a person steals a pair of shoes from a shop, is this the fault of the shop for not protecting their goods or the shoplifter for stealing them? Of course the fault lies with the shoplifter who carried out the crime in the first place.”
5. Spot statistical flaccidity
“Peer reviewers do not check all the datasets, rerun calculations of p-values, and so forth, except in the cases where statistical reviewers are involved — and even in these cases, statistical reviewers often check the methodologies used, sample some data, and move on.” So wrote Kent Anderson, who has served as a publishing exec at several top journals, including Science and the New England Journal of Medicine, in a recent blog post.
6. Prevent really bad research from seeing the light of day
Again, Kent Anderson: “Even the most rigorous peer review at a journal cannot stop a study from being published somewhere. Peer reviewers can’t stop an author from self-promoting a published work later.”
Even when you lower expectations for peer review, it appears to come up short. Richard Smith, former editor of the BMJ, reviewed research showing that the system may be worse than no review at all, at least in biomedicine. “Peer review is supposed to be the quality assurance system for science, weeding out the scientifically unreliable and reassuring readers of journals that they can trust what they are reading,” Smith wrote. “In reality, however, it is ineffective, largely a lottery, anti-innovatory, slow, expensive, wasteful of scientific time, inefficient, easily abused, prone to bias, unable to detect fraud and irrelevant.”
So … what’s left? And are the scraps that remain worth the veneration peer review receives? Don’t write about anything that isn’t peer-reviewed, editors frequently admonish us journalists, even creating rules that make researchers afraid to talk to reporters before they’ve published. There’s a good chance it will turn out to be wrong. Oh? Greater than 50 percent? Because that’s the risk of preclinical research in biomedicine being wrong after it’s been peer-reviewed.
With friends like these, who needs peer review? In fact, we do need it, but not only in the black box that happens before publication. We need continual scrutiny of findings, at sites such as PubMed Commons and PubPeer, in what is known as post-publication peer review. That’s where the action is, and where the scientific record actually gets corrected.
That poor peer review dog may not be good for guarding your house, but she’s probably great to cuddle with. But maybe get that home security system for the real deal.
Ultimately, access to the original data by independent others for reanalysis and replication purposes is the bedrock for judging the internal and external validity of published and unpublished findings. Journals have found so-called “peer review” to be a convenient way of justifying publication of questionable reports. It is the equivalent of wrapping the flag around oneself to show patriotic fervor.
Just as independent access to the original data behind published and unpublished findings is required for reanalysis and replication, so access to the results of peer review is required to judge their validity. In the mid-1970s, Science published an RCT of peer review that employed a structured 67-item threats-to-inference instrument by which to reach judgments about the effectiveness of peer review. See: Noble JH Jr. Peer review: quality control of applied social research. Science 1974;185:916–921.
Why is it not possible for journals to require their peer reviewers to employ a similarly structured instrument to justify their approval or rejection decisions, and to make the anonymized results available to all and sundry for purposes of reanalysis and replication? Or do you think the journals would wither and die from exposure to the light?
https://www.academia.edu/27297345/Areas_of_Research_and_Preliminary_Evidence_on_Microcephaly_Guillain-Barr%C3%A9_Syndrome_and_Zika_Virus_Infection_in_the_Western_Hemisphere — well, this one got severely edited and ended up like this in NEJM…
What can one say? The fraud at the CDC is unbelievable…