If a burst pipe in your house is flooding your basement, you’re probably going to be more worried about that than the couple of termites you previously spotted. But multiply those termites by a thousand and suddenly the bigger threat to your house might be, well, the little things.

The same holds true for science. Science fraud draws urgent attention whenever it comes to light, the equivalent of a busted-pipe emergency. But it turns out that most scientists consider it a far lesser threat to their field than the small, but legion, instances of underreporting of negative findings and scientists’ use of shoddy methodology.


And — although it may be surprising coming from two people who run a blog that often focuses on scientific misconduct — we agree.

But first the findings. Researchers in the Netherlands asked working scientists around the world to rank a list of 60 misbehaviors by their impact on truth, trust in science, how often they occur, and how preventable those actions might be. They then devised a ranking for these behaviors that combined how often they occur and their impact — a sort of on-base plus slugging average that measures their overall effect on the field.

Not surprisingly, fabrication of data scored the highest for its effect on truth and public trust in science. But those cases are quite rare — and detected cases are, by definition, even rarer. As a result, it didn’t even make the top five.


Rather, “our ranking results seem to suggest that selective reporting, selective citing, and flaws in quality assurance and mentoring are the major evils of modern research,” the authors wrote in Research Integrity and Peer Review. “A picture emerges not of concern about wholesale fraud but of profound concerns that many scientists may be cutting corners and engage in sloppy science, possibly with a view to get more positive and more spectacular results that will be easier to publish in high-impact journals and will attract many citations.”

Sloppiness and fraud often share a mother: the imperative to publish, and ideally in high-profile journals. And sloppiness is shockingly common: while about 2 percent of researchers admit to committing fraud in their research, as many as 34 percent of scientists say they have cut corners or taken similarly questionable steps in their work.

So what should we do?

The authors suggest throwing some science at the problem. “All attempts to fight sloppy science and worse should ideally be accompanied by sound evaluation to assess their effects,” they wrote.

But some approaches that have already been evaluated don’t appear to work so well. For instance, efforts to reduce plagiarism may help, but those to reduce the incidence of fraud, not so much.

More study of interventions is definitely needed. And while universities continue to sponsor such initiatives, they should also take a hard look at their own roles in promoting “publish or perish.” After all, their tenure and promotion committees often take the easy route of counting papers to judge a body of work, instead of doing the hard work of plowing through the papers themselves. Until they do, we’re not likely to make much headway on the small but insidious problems plaguing science.

  • Under- or non-reporting of negative results is an issue that cannot be solved. Imagine if all negative results were published, flawed experiments included. How many publications would be needed for that? No amount of pages would satisfy the need.

    The real issue is “Who should be allowed to publish?” Back in the day there used to be some self-censorship from the labs. The US, and other countries, have developed degree-minting systems instead of academic institutions. This has led to a parallel system of phony scientific journals. People with the writing and scientific skills of a high-school student are granted PhDs. All these so-called PhD holders compete for publications and citations. Fake science is a much bigger issue than the non-reported negative results of valid experiments.

  • In my mind, fraud includes selectively reporting results in order to deceive. It also includes knowingly using flawed methodology that yields false or less robust conclusions without good reason, such as budget constraints, and without clearly reporting the limitations and uncertainties of the study. A rose by any other name. I lost a job, though not in the biological sciences, because I refused a boss’s directive to commit such statistical fraud for purposes of deceiving top management. I believe we need better societal methodologies for reviewing methodologies, both before and after scientific studies. Perhaps we need a new profession of scientific auditors.

  • It seems that a number of scientists have adopted the methodology of Stalin’s pet scientist Trofim Denisovich Lysenko: a methodology that evaluates scientific theories by their conformance to a political ideology and suppresses dissent.

  • The biggest problem is systemic: science is not self-supporting. And nobody is going to discover something that embarrasses his patron. So whoever pays the bills dictates what science is to be. I don’t see any way to correct that.

  • As a bench scientist in the pharmaceutical industry, I often have to retrain colleagues who are fresh from academia on how to conduct a rigorous experiment and how to validate key findings with orthogonal methods. In my job the results don’t have to be spectacular, but they do have to be robust and repeatable. Negative findings are viewed as being just as important, and data quality is considered paramount.
    It’s not that my academic colleagues are deliberately sloppy; it’s often that they haven’t been taught how to validate their experimental methodology or how to assess the different sources of experimental variability in order to ensure that positive results are above the noise. This sort of QA work is unglamorous, time-consuming, and often expensive, so I can see why some academics with limited budgets and high pressure to publish neglect this important part of the scientific process.

    • Funny story
      Over the years, I have witnessed a number of colleagues moving to industry and making even “brilliant” careers, while I was leading myself to a dead-end career in academia.
      A common trait of all of them (literally) was to possess at least one of these requisites: cutting corners, sloppy science, or fraud. Or even all of them together. I could name names and give descriptions if you wish.
      I am assuming you started your journey in academia too. But I am sure you are an exception.

  • Nice words

    But the reality is that the few who try to do things the right way end up out of a job, while the many who cut corners and engage in sloppy science all the way to fraud have good to brilliant careers.
    After all, both industry and academia reward those behaviors, and the people higher in the rankings most likely made their careers following that path.

    • Summed up perfectly by ugo, and it is all perpetuated by the publish-or-perish system we work under. Very demoralizing for early career researchers such as myself, knowing this is what we have to do.

    • Very true Ugo.
      Patrick, do not forget that it is very demoralising also for mid- and late-career researchers. When I entered this career 25 years ago, it was not perfect, but at least science was about “exploring nature and finding the truth”. That philosophy, that poetry even, was the very reason I got into this career in the first place. The second reason was a fair chance of getting a position. Nowadays, everything has turned into “twisting the truth and publishing in Nature”. And scientists now get a position or promotion only if they have published enough crap. Nobody dares say anything: some because they are true crooks, some because they are naive or prefer not to see anything, and most because as soon as you speak up, you are automatically and instantly rejected by the system and left without a job. A true totalitarian system.
