AMSTERDAM — “Bring it.”
That was Richard Mann’s response, back in 2012, when a friend told him to brace himself for some news.
Mann, then a researcher at Uppsala University in Sweden, had recently published a paper based on video footage of swimming prawns. To make his task simpler, Mann had instructed his computer not to process every frame of the footage, on the not-unreasonable theory that the slow-moving shrimp wouldn’t be doing anything of particular interest in the ignored frames.
But Mann’s friend was writing to let him know that there was a problem: In trying to replicate the study, he’d noticed that the shortcut was a bit too short. Much too short, in fact: The analysis had captured just 1 percent of the total dataset — a fatal mistake for the study.
As Mann related on Monday at the World Conference on Research Integrity, his immediate reaction was dismay. The timing of the revelation was particularly bad: He was slated to give an hourlong presentation on the research at a conference in just a few days.
But he got through it, and the rough patch that followed, as he wrote in a 2013 blog post about the affair: “Suffice to say I had a very drunk Skype conversation with my boss who was very good about the whole thing, I somehow gave a successful seminar despite having ‘CAUTION, POSSIBLY INVALID’ over my most important results, and after crafting an extremely apologetic statement the paper was retracted. … I didn’t sleep very much for a few months.”
That was five years ago, and the sting of the experience has largely faded. Mann’s story ends well, thanks in no small part to his willingness to own the embarrassing mistake from the moment he learned about it. He and his colleagues reanalyzed their data — correctly, this time — and republished their article in March 2013 in the same journal.
In other words, Mann’s story is an object lesson in how scientists should confront their errors: head-on, promptly, and transparently. What’s more, he believes that mistakes like his are almost certainly not rare — and that scientists can’t rely on reviewers to catch them. Rather, Mann is convinced that sharing code — and data — is the way to catch flaws. “The chances of peer-review catching this sort of error are somewhere between very small and non-existent,” he wrote in the blog post.
Finally, he is wary that retractions, burdened by a perceived stigma, may be discouraging researchers from acknowledging errors.
We’re sympathetic to that notion. Although retractions certainly have their place, we applaud efforts to develop alternative ways of correcting the record — ways that reward doing the right thing rather than relying only on methods that punish by removing results. And it turns out that scientists — and at least one company — who confess to mistakes using retractions don’t tend to suffer the consequences many fear.
And researchers who make mistakes don’t have to do a Full Mann and bare their souls to the internet and an audience of more than 800 strangers at a major international conference in the Netherlands — although we’d gladly buy him a herring for his bravery. All they really need to do is correct the record.