If a burst pipe in your house is flooding your basement, you’re probably going to be more worried about that than the couple of termites you previously spotted. But multiply those termites by a thousand and suddenly the bigger threat to your house might be, well, the little things.
The same holds true for science. Science fraud draws urgent attention whenever it comes to light, the equivalent of a busted-pipe emergency. But it turns out that most scientists consider it a far lesser threat to their field than the small but legion instances of underreporting of negative findings and shoddy methodology.
And — although it may be surprising coming from two people who run a blog that often focuses on scientific misconduct — we agree.
But first the findings. Researchers in the Netherlands asked working scientists around the world to rank a list of 60 misbehaviors by their impact on truth, trust in science, how often they occur, and how preventable those actions might be. They then devised a ranking for these behaviors that combined how often they occur and their impact — a sort of on-base plus slugging average that measures their overall effect on the field.
Not surprisingly, fabrication of data scored the highest for its effect on truth and public trust in science. But those cases are quite rare — and detected cases are, by definition, even rarer. As a result, it didn’t even make the top five.
Rather, “our ranking results seem to suggest that selective reporting, selective citing, and flaws in quality assurance and mentoring are the major evils of modern research,” the authors wrote in Research Integrity and Peer Review. “A picture emerges not of concern about wholesale fraud but of profound concerns that many scientists may be cutting corners and engage in sloppy science, possibly with a view to get more positive and more spectacular results that will be easier to publish in high-impact journals and will attract many citations.”
Sloppiness and fraud often share a mother: the imperative to publish, and ideally in high-profile journals. And sloppiness is shockingly common: While about 2 percent of researchers admit to committing fraud in their research, as many as 34 percent of scientists say they have cut corners or taken similarly questionable steps in their work.
So what should we do?
The authors suggest throwing some science at the problem. “All attempts to fight sloppy science and worse should ideally be accompanied by sound evaluation to assess their effects,” they wrote.
But some approaches that have already been evaluated don’t appear to work so well. Efforts to reduce plagiarism, for instance, may help; those to reduce the incidence of fraud, not so much.
More study of interventions is definitely needed. And while universities continue to sponsor such initiatives, they should also take a hard look at their own roles in promoting “publish or perish.” After all, their tenure and promotion committees often take the easy route of counting papers to judge a body of work, instead of doing the hard work of plowing through the papers themselves. Until they do, we’re not likely to make much headway on the small but insidious problems plaguing science.