In 2014, David Allison noticed something wonky with a paper in the journal Childhood Obesity. The article purported to find that children who regularly ate kids’ meals with toys inside — we won’t name them, but an example might rhyme with “shmappy shmeal” — were liable to consume excess calories. But Allison spotted that the researchers had incorrectly analyzed their data, causing them to exaggerate the effects by more than tenfold.
That realization prompted a letter from Allison, a biostatistician at the University of Alabama at Birmingham — which, in full disclosure, receives funding from the National Restaurant Association — and his colleagues. Several months later, the journal retracted the paper.
Hurray for science and its much-vaunted self-correcting mechanism, right?
In an article in Nature this week, Allison and colleagues report that such corrective action by a journal is far from the norm. Over a period of 18 months, the group submitted correction and retraction requests for 25 papers in a variety of publications, and secured only one other retraction. A few other journals published their letters but did not retract the studies.
Mostly, though, Allison’s group found the effort a waste of time. “Too often, the process spiralled through layers of ineffective e-mails among authors, editors and unidentified journal representatives, often without any public statement added to the original article,” Allison and colleagues write.
The problem gets worse: “Some journals that acknowledged mistakes required a substantial fee to publish our letters: we were asked to spend our research dollars on correcting other people’s mistakes.” One unnamed publisher even had the gall to declare that it would exact a fee of $10,000 from authors of withdrawn papers — a draconian amount that clearly poses a disincentive to doing the right thing.
So this is the state of scientific self-correction in the 21st century. Despite the opportunities for instant access to studies, international and public scrutiny, and rapid communication, getting a paper retracted or corrected turns out to be well-nigh impossible.
We’ve seen this ourselves in forwarded email exchanges between researchers bringing problems to journals’ attention and editors who, let’s say, pay less attention than they ought to. Some 500 to 600 articles are retracted each year, and while we wouldn’t hazard a specific guess as to how many more ought to be pulled, we’re confident the true number is many times greater than what we’re seeing.
Take the case of cell lines, widely used in biomedical research for everything from testing drugs to seeing how our bodies work. It turns out that many researchers who did those studies were using the wrong cell lines, meaning that what seemed like a promising finding in kidney tumors was in fact a finding in cervical cancer. And yet the vast majority of those papers carry nary a warning.
Allison’s group calls out what they consider six critical problems with science when it comes to self-correction:
- editors move too slowly to correct or retract;
- it’s hard to find out where to send criticisms;
- when concerns are expressed informally, such as in comments sections of journal webpages, they rarely trigger action;
- journals are loath to take action even when confronted with “invalidating” errors;
- getting access to researchers’ raw data is a difficult and haphazard affair;
- and, of course, the whole charge-the-whistleblower thing.
To that list we’d add lawyers. In a refreshingly open 2014 editorial, Nature admitted that, when trying to correct the record, “journals might find themselves threatened with a lawsuit for the proposed retraction itself, let alone a retraction whose statement includes any reference to misconduct.” So that’s another disincentive for them to retract.
So what to do? Allison and his colleagues call on journals, publishers, and other players to standardize and streamline. “Address readers’ concerns swiftly,” they write. “Use formal expressions of concern as an alert that work is under scrutiny — rather than for condemnation.” Their recommendations make sense, although many of them are what journal editors insist — defensively, in our experience — that they already do. (And, in fairness, a number of researchers and editors have earned a place on our “doing the right thing” list.)
In the end, Allison’s group writes, “Robust science needs robust corrections. It is time to make the process less onerous.”
To which we say: It sure is.