As anyone who has ever played pickup basketball knows, allowing players to call fouls on themselves works pretty well to keep the court clean. So what about in science?

That approach, in essence, is what Daniele Fanelli is calling for as a way to encourage researchers to report their own mistakes. Doing so, he argued in Nature this week, could make retracting papers for honest error less punitive and more palatable — and lead to a much cleaner scientific literature.

Fanelli, a researcher at Stanford who has contributed to recent studies of misconduct, believes it’s time for a middle ground when it comes to correcting the scientific record: Allow researchers to collectively — by agreement of all the authors — “self-retract” papers marred by mistakes.

Evidence to date shows that about two-thirds of the roughly 600 retractions per year result from misconduct, and 20 percent or so from honest error. The true share of retractions for honest error, Fanelli argued, should be substantially greater.

That fraction is artificially low, Fanelli wrote, because retractions “are often a source of dispute among authors and a legal headache for journal editors. The recalcitrance of scientists asked to retract work is not surprising. Even when they are honest and proactive, they have much to lose: a paper, their time and perhaps their reputation.” That assessment is borne out by the experience of one team that tried to clean up a small part of the literature.

(But on the bright side, some research indicates that scientists who retract papers for honest error do not see their later studies cited any less often, a sort of “trust dividend” compared with what happens to researchers who retract for fraud.)

The reason self-retraction would work well, Fanelli argued, is that journals are already halfway there. In general, when a retraction is made for honest error, all authors end up signing it. That’s not the case for retractions involving misconduct; generally, bad actors won’t sign, since doing so amounts to an admission of guilt, while their duped co-authors are desperate to distance themselves from the tainted work. So in some ways, Fanelli’s suggestion would simply reinforce typical behavior by labeling it.

In case doing the right thing isn’t quite incentive enough, Fanelli threw in this carrot: Self-retractions would be a kind of retraction-plus, serving as quasi-publications that would themselves be citable. “Self-retractions should be considered legitimate publications that scientists would treat as evidence of integrity,” he wrote. “Self-retractions from prestigious journals would be valued more highly, because they imply that a higher sacrifice was paid for the common good.”

Fanelli acknowledged that his model is vulnerable to a hack by fraudsters who, facing the prospect of detection, would preemptively retract a study to avoid scrutiny. But those instances would almost certainly be quite rare — and so what? The net effect is that the scientific literature would become cleaner. Which, in the end, is what this game is all about.

The readers of Retraction Watch apparently agree with Fanelli, at least according to an informal and highly unscientific poll we conducted this week. About 90 percent of the hundreds of readers who’ve responded said that yes, “journals should have a category of ‘self-retraction’ for honest errors.”

And Fanelli went one further, suggesting a sort of (dare we say it) amnesty period to test the model. “It would not be unholy to grant a year of ‘scientific jubilee,’ during which journal editors allowed authors to self-retract papers, no questions asked,” he wrote. “The literature would be purged, repentant scientists would be rewarded, and those who had sinned, blessed with a second chance, would avoid future temptation.”

Sounds worth trying.