
Placing trust in science can be easier when findings are confirmed, but a new survey finds that most scientists believe there is a “crisis” in reproducibility.
Specifically, 52 percent called the difficulty of replicating results a “significant” crisis, and another 38 percent believe a “slight” crisis exists. More than 70 percent of researchers have tried and failed to reproduce another scientist’s experiments, according to Nature, which canvassed 1,576 researchers. And more than half of the respondents reported that they have failed to reproduce their own experiments.
Yet only one-third believe that failing to reproduce results means a study is probably incorrect, and most of those asked say they continue to trust published findings. Moreover, 73 percent think that at least half of the papers in their own fields can be believed, with physicists and chemists expressing the most confidence.
Nature wrote that researchers often assume there is a “valid reason” when a replication fails, and this perception is compounded by a lack of incentive to publish successful replications. Moreover, medical journals may be reluctant to publish negative findings, according to Nature.
The survey presents a confusing picture at a time when there is growing importance attached to reproducing results.
For instance, a Merck executive recently suggested that drug makers should be entitled to get their money back for potential treatments licensed from universities if the company is unable to reproduce the results in subsequent experiments. In his view, the inability to replicate findings is reason to rethink the relationship between academia and industry.
At the same time, the pharmaceutical industry has been under pressure to release trial data so that claims about its medicines can be verified. The issue gained momentum in the wake of several scandals over undisclosed side effects. Last year, a group of researchers analyzed trial data for a GlaxoSmithKline antidepressant and found the original safety claims could not be reproduced.
Turning back to the lab, the survey found that scientists cited several reasons for difficulties in reproducing results. More than 60 percent pointed to pressure to publish and selective reporting as regular issues. More than half cited insufficient replication in the lab, poor oversight, or low statistical power, Nature wrote. And some noted that specialized techniques, for example, can be difficult to repeat.
To improve outcomes, the scientists said that better study designs and better statistics are needed, Nature wrote, adding that one-third of the respondents said their labs had taken steps in the past five years to bolster the odds of replicating results.
Often, even studies of supposedly available statistical data (e.g., tests of economic theories) are not reproducible, because the authors hide behind potentially unavailable references instead of providing the data in full.
What do you mean by unavailable references? If the data haven’t been published, it is appropriate to use the citation “unpublished data.” If you are referencing another scientist’s data or methods that have been provided to you but not published, it is unethical to describe them in your paper without permission; you can cite them as “in publication” or even “personal communication” as long as you have permission.
In general, academic studies are done in order to generate enough data for hypothesis testing. Almost always, they have too few subjects to produce statistically significant results and, as such, are grossly underpowered. This is why, when Pharma licenses a new drug, the payments are usually back-end loaded, with the largest portion reserved for when the results are confirmed in large-scale, adequately powered studies. Other academic studies suffer from trial design flaws, such as improper or absent controls, lack of proper randomization, and other lapses that can produce biased results. Many academic biostatisticians lack the expertise that Pharma biostatisticians have in these areas. Rosenblatt, the Merck executive mentioned above, expresses buyer’s remorse, and it’s quite symptomatic of the fact that there are so few promising drug candidates that companies routinely overpay for product candidates that don’t pan out in Phase 3.
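The point about power in this comment is easy to see with a quick simulation. The sketch below is my own illustration, not anything from the article or the survey; the effect size, sample sizes, and significance threshold are arbitrary assumptions chosen for the example. It estimates how often a two-arm study with a real but modest treatment effect actually reaches statistical significance:

```python
# A minimal power simulation (illustrative only; the 0.3 effect size,
# unit variance, and alpha = 0.05 are assumptions, not figures from the
# article). Each simulated "study" compares a treated arm and a control
# arm with a two-sample t-test; power is the fraction of studies that
# reach significance even though a real effect is always present.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_per_arm, effect_size=0.3, alpha=0.05, n_sims=5000):
    """Estimate power by simulating many small two-arm studies."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect_size, 1.0, n_per_arm)  # true effect exists
        _, p_value = stats.ttest_ind(treated, control)
        if p_value < alpha:
            hits += 1
    return hits / n_sims

for n in (20, 50, 200, 500):
    print(f"n = {n:>3} per arm: estimated power ≈ {estimated_power(n):.2f}")
```

Under these assumptions, a study with 20 subjects per arm detects the true effect only a small fraction of the time, while hundreds of subjects per arm are needed to approach the 80 to 90 percent power that confirmatory trials typically target, which is consistent with the commenter’s point that underpowered academic findings often fail to confirm in larger studies.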