Four in 100 doesn’t seem like a big number, and in many instances it isn’t very impressive. After all, if Steph Curry sank only 4 percent of his three-pointers, the Golden State Warriors wouldn’t have set a record this year for most wins in an NBA season.
But 4 percent might trigger alarm bells in science publishing. In this case, the number is the share of published papers in biomedical fields containing at least one inappropriately duplicated image — which means that scientific results are being misrepresented to appear better than they truly are.
The finding appears in a new analysis by Elisabeth Bik, a microbiologist at Stanford University, and coauthors Ferric Fang and Arturo Casadevall. The latter two have been leading figures in research integrity for some time, having contributed to a 2012 paper that first identified misconduct as the reason for two-thirds of retractions of scientific articles.
Bik and her colleagues’ undertaking was impressive and massive. They analyzed, by hand, more than 20,600 articles that had appeared in 40 science journals between 1995 and 2014. To focus their search, they looked for three types of issues: “simple” duplications, in which researchers used the same images to represent different experiments; “duplication with repositioning,” in which authors rotated or otherwise moved an image to make it appear new in a subsequent publication; and “duplication with alteration,” in which scientists tinkered with elements of duplicated images to generate new figures.
They found that 3.8 percent of the papers, or roughly one in 25, contained figures they considered “problematic”; roughly half of those appeared to represent “deliberate manipulation.”
Previous screens have found varying rates of problematic images, from about 1 percent to 25 percent, and the real number is almost certainly higher than what Bik and her colleagues report; they threw out examples on which they couldn’t all agree. The problem also appears to be getting worse: The rate of dodgy images grew in the later years of the study period. And the researchers noted a significant amount of recidivism, as authors with one flawed figure were more likely to have other papers marred by similar issues.
Bik’s work has already led to six retractions – four of them from the journal Infection and Immunity, which Fang edits. She has also notified roughly 10 institutions where repeat offenders are employed.
Should the new study come as a surprise? Perhaps not. Surveys have found that roughly 2 percent of scientists admit to committing misconduct in their research. The 2 percent in those surveys and the roughly 2 percent of likely deliberate duplication that Bik found are not an apples-to-apples comparison, but together they suggest the open question is one of degree, rather than a sudden discovery that researchers can, gasp, behave badly. And it is more evidence that the number of retractions in the biomedical literature, nearing 700 per year and growing, is smaller than the number of papers that should be retracted.
What is surprising to us, however, knowing the rigor with which Bik and her colleagues have approached this and other work, is the dismissiveness with which journals received Bik’s study. Her article was rejected three times, including by two journals that declined to send it out for peer review. For Bik, that’s a sign of defensiveness, not a reflection of the quality of the work. “No journal wants to hear a percentage of their papers is considered very bad,” Bik told Retraction Watch.
Rather than give up, the researchers published their study on a preprint server called bioRxiv. There’s a bit of irony here: Bik’s paper hasn’t been through peer review, though the many problematic ones she identified have.
Journals will now have to decide how to act on Bik’s findings. What should happen to these papers — corrections, retractions, other notices — is ultimately up to them. But here’s a bit of free advice: Ignoring the problem, as many seem willing to do, is not going to work for much longer.