Journals pin corrections on scientific articles for all sorts of reasons — from the mundane, like minor typos and wording changes, to the significant, such as errors that warrant a detailed explanation.
But the process for correcting a published article can be needlessly burdensome and time-consuming, and stories abound of scientists who tried to do the right thing, flagging a minor error or update in their own work, only to face hurdles ranging from delays to flat-out denials from journals.
Now, some researchers have decided to take matters into their own hands, using a comment feature on the widely used PubMed site. Why, after all, should readers wait to learn of updates, and, in the case of potentially serious flaws that might affect scientific conclusions, continue chasing what are likely to be dead ends that could be easily turned around with new information?
Although calling the phenomenon a trend might be a bit premature, a new article by the team behind PubMed Commons — a government-hosted forum that lets users comment on papers in the Medline database — notes with enthusiasm the recent increase in the number of researchers who have used the site to alert readers to potential problems with their work.
One researcher they cite is Garret Stuber, a neurobiologist at the University of North Carolina. In February, just days after publishing a paper in Nature Neuroscience, Stuber realized that some images had been duplicated during the back-and-forth with the journal. In addition to writing to the journal, he turned to PubMed Commons to let others know about the issues, in “an effort for immediate notice and transparency to what occurred,” he told Retraction Watch. “I thought that this would be the best course of action in order to avoid any further confusion.”
Another group of researchers who noticed several errors in their 2012 article in the Journal of Clinical Endocrinology and Metabolism did double duty — getting the journal to correct the article, and then commenting on PubMed Commons to share a link to that correction.
Those actions are important because PubMed has, over the decades, become the leading gateway for biomedical research. A scientist is far more likely to encounter an abstract on PubMed than anywhere else first.
As we and others have argued for some time now, scientific papers are not monuments cut from stone that, once published, cannot and should not be altered. In our view, readers’ and researchers’ use of social media and online forums to point out flaws in articles is a good thing. It means the system relies less on editors and journals as gatekeepers to concerns being made public. And many editors and journals don’t seem all that interested in addressing such issues — a large part of the reason that PubPeer was created.
A close cousin, philosophically, is an idea dreamed up by some journal editors and other scholars to allow authors to amend their own papers. Under the proposed system, laid out earlier this month, researchers would have the opportunity to tag their articles with “amendments” about mistakes or other issues ranging from “insubstantial” to “complete” — the functional equivalent of a retraction. Indeed, the system is designed to replace the “correction” and “retraction” regime.
The advantage of the approach, according to proponents, is that letting authors take charge of post-publication quality control will speed the whole thing up. Journals are bogged down by procedural issues, production demands, and other constraints — including lawyers, when editors are alleging fraud — that make them about as nimble as a cruise ship on an America’s Cup course.
Of course, opponents might argue that these approaches leave too much in the hands of authors while undermining the expertise of journals and editors in handling such matters. We’re sympathetic — to a point. We can certainly envision a scenario in which unscrupulous authors dupe their peers by suggesting that their errors are benign. Daniele Fanelli, who called for “self-retractions” as a way to clean up the literature, anticipated that potential problem, too.
So, some open questions linger. What’s not an open question, though, is that science needs to be self-correcting — and the current system for making that happen just isn’t working anymore.