
The clinical trial industry, which I work in, is in crisis.

Roughly half of clinical trials go unreported. Industry-sponsored trials are four times more likely to produce positive results than non-industry trials. And even when trials are reported, the investigators usually fail to share their study results: nearly 90 percent of trials on ClinicalTrials.gov lack results.

Failure to report clinical trial results puts patients in danger. Here’s one example: GlaxoSmithKline, the maker of the antidepressant Paxil, recently paid $3 billion for failing to disclose trial data showing that Paxil was not only no more effective than placebo but was also linked to increased suicide attempts among teenagers. The effectiveness of statins, the anti-flu medicine Tamiflu, antipsychotics, and other drugs has come under question due to improperly reported data. Without complete disclosure of trial results, physicians can’t make informed decisions for their patients.


A recently passed final rule from the Department of Health and Human Services now requires that all NIH-sponsored clinical trials be reported on ClinicalTrials.gov. A complementary policy from the National Institutes of Health covers registering and submitting summary results information to ClinicalTrials.gov for all NIH-funded trials, including those not covered by the final rule.

Unreported trials are subject to daily fines of $11,833. Researchers have 90 days to comply after the rule takes effect on January 18, 2017. Excellent summaries of the rule have been published by the NIH and in the New England Journal of Medicine.


The final rule should help address some of the troubling trends in the clinical trial industry. It clears up ambiguous reporting requirements and explicitly requires investigators to submit clinical trial results, adverse events, and statistical methods. These are steps in the right direction that could limit the unscientific practices plaguing the trial industry.

But the final rule doesn’t go far enough, mainly because the FDA lacks the staff and the political will to adequately enforce it. As STAT reported in December 2015, the FDA had never levied a single fine for clinical trial reporting violations. Representatives from the FDA cite legal complexities and a lack of staff, yet critics have also pointed out that the FDA is effectively on the pharmaceutical industry’s payroll. Under the Prescription Drug User Fee Act, the FDA supplements its budget by charging pharmaceutical companies drug application fees, which totaled $855 million in fiscal year 2015.

The current FDA commissioner, Dr. Robert Califf, has said that the FDA will not be adding staff to enforce the final rule. That’s a mistake. How else can we expect the rule to be enforced? I work in a research group that conducts more than a dozen clinical trials, and I know firsthand that researchers have little incentive to report their trials unless there is strong external pressure to do so, like enforcement and the threat of fines.

In a perfect world, the FDA would receive more funding to hire employees so it could independently enforce this policy. In the meantime, researchers can check the reporting practices of their own institutions or sign a petition to support the AllTrials campaign. Another project called OpenTrials, a collaboration between Open Knowledge International and the University of Oxford DataLab, aims to “locate, match, and share all publicly accessible data and documents, on all trials conducted, on all medicines and other treatments, globally.” It is seeking volunteers to contribute clinical trial data.

I know from personal experience that clinical trial reporting can be tedious and seemingly unrewarding work. But the transparent exchange of scientific data is integral to evidence-based medicine and public health. While the new final rule is a step in the right direction, the public and the research community also need to support efforts like AllTrials and OpenTrials.

Chris Cai is a clinical research coordinator at Massachusetts General Hospital in Boston.

  • Hi Adam

    Wow, thank you for these contributions. I took a look through the “zombie statistics” post you linked to and found it persuasive. I love what you did with your point-by-point analysis of each of the sources cited by AllTrials (although I confess that I still need to read through all the journal articles cited to really pick apart the methods). Have you been in dialogue with the AllTrials campaign about this?

    Certainly, a bit of strategic rhetoric might be forgiven for any political campaign, but as you point out, it’s never good to fudge statistics to mislead individuals.

    If we can agree on the points you lay out, that the 50% figure is more a soundbite or historical artifact than an accurate representation of current reporting habits, I wonder what you’d recommend as a logical course of action.

    Should AllTrials endorse the arguments you’ve made? From a realpolitik perspective, would this hurt worthy efforts to reform the clinical trial industry? Or might it be closer to the ideals of open science that AllTrials was formed to support?

    Another question: is it necessary to resort to soundbites in this social media age dominated by six-second videos and self-selecting echo chambers? Perhaps AllTrials is aware of the evidence you point out and feels that political necessities make the 50% figure justified.

    Regardless, I think the points you make towards the end of your article deserve more attention:

    “It is disappointing to see an organisation nominally dedicated to accuracy in the scientific literature misusing statistics in this way.

    And it is all so unnecessary. There are many claims that All Trials could make in support of their cause without having to torture the data like this. They could (and indeed do) point out that the historic low rate of reporting is still a problem, as many of the trials done in the last century are still relevant to today’s practice, and so it would be great if they could be retrospectively disclosed. If that was where their argument stopped, I would have no problem with it, but to claim that those historic low rates of reporting apply to the totality of clinical trials today is simply not supported by evidence.

    All Trials could also point out that the rates of disclosure today are less than 100%, which is not good enough. That would also be a statement no reasonable person could argue with. They could even highlight the difficulty in finding research: many of the studies above do not show low rates of reporting, but they do show that reports of clinical trials can be hard to find. That is definitely a problem, and if All Trials want to suggest a way to fix it, that would be a thoroughly good thing.”

    Will keep reading through the evidence you presented. Thank you! Looking forward to thinking about this more.

  • “Roughly half of clinical trials go unreported.”

    That figure is widely quoted, but it is a myth. Most recent studies show that only 10-20% of trials go unreported. That figure is still too high, of course, but it’s very far from “roughly half.”

    • Hi Adam

      When I looked through the literature, I found varying estimates. I ended up going with the 50% figure based on the studies cited at the end of this comment.

      I found these studies through the AllTrials FAQ page. Although they clearly have an interest in identifying a problem in reporting, I found the evidence they cited convincing when I read through the primary literature. The Song et al. systematic review was particularly thorough. AllTrials’ defense of the 50% figure is quoted below and can also be found on their FAQ page. Whatever the actual figure is, it seems to me that the effects of underreporting are extremely harmful.

      “Some recent individual papers have found a higher rate of publication. These studies have looked at small subsets of all trials, generally the most recent trials, from the past couple of years, on the very newest drugs, over short time periods. With all the campaigning, new regulations and emerging codes of conduct, we would hope and expect some of this improvement to be true (although one industry study on missing data also has several methodological flaws). However, all the evidence needs to be integrated before we can know whether there has been an improvement in transparency overall. Furthermore, the most recent trials represent only a very tiny fraction of the evidence that is needed to guide everyday decisions for patients today. Doctors do not practice medicine using only treatments, or trial results, from the past three years. We need all the results, of all trials from the past three decades, and urgently, because these are the trials that cover the treatments that patients use today. Because around half of all trials were not published over many, many years, we will have to uncover a large number of those older trials, for the percentage of all trials published to change significantly.”

      Sources (as cited in the AllTrials FAQ page):

      Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, Hing C, Kwok CS, Pang C, Harvey I. Dissemination and publication of research findings: an updated review of related biases. Health Technology Assessment 2010;14(8).

      Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. Sim I, editor. PLoS Medicine. 2009 Sep 8;6(9):e1000144.

      Munch T, Dufka FL, Greene K, Smith SM, Dworkin RH, Rowbotham MC. RReACT goes global: perils and pitfalls of constructing a global open-access database of registered analgesic clinical trials and trial results. Pain. 2014 Apr 13. pii: S0304-3959(14)00175-4. doi: 10.1016/j.pain.2014.04.007.

      Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gøtzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. The Lancet. 2014 Jan 18;383(9913):257-266. doi: 10.1016/S0140-6736(13)62296-5.

    • Hi Chris

      I’m afraid that the All Trials statistics are somewhat cherry-picked. They ignore more recent evidence showing much higher rates of reporting.

      I’ve written a detailed critique of their statistics here:

      Since I wrote that, a further study has been published that found a 93% disclosure rate.

    • Hi Chris

      As a follow-up to my previous post: although the link I gave in my last comment discusses the first three of those papers you cite and explains why they don’t support the claim that only half of all trials are published, I hadn’t discussed the last one (Chan et al. 2014).

      So here goes.

      The Chan et al paper does not provide new data. It is in fact an opinion piece discussing a variety of aspects of reducing waste in research, of which publication rates are only one. They cite a whole bunch of studies of publication rates, but they give no details of their search strategy. This makes it hard to know whether they have really included all relevant studies.

      Many of the studies they present are old ones, showing publication rates from the 1980s and 1990s (and even one or two from the 1970s and 1960s). I think it’s pretty clear that publication rates were too low back then, but that doesn’t necessarily tell us much about what is happening in this century.

      But they also seem to present lower statistics from the studies than they could. For example, Ross et al. 2012 (reference No. 25 in their appendix) found that 68% of studies were published by the end of their study, though if you take a cut-off of 30 months after the end of the study, only 46% were published. Chan et al. took the figure of 46% rather than 68% (and in the blog post I linked to in my last comment I explain why even that 68% is likely to be an underestimate of the true disclosure rate).

      They also cite Bourgeois et al. 2010 (reference No. 28 in their appendix), and say that it found a publication rate of 66%. In fact, that’s only true if you ignore results disclosure on websites. If you include that as well, then the disclosure rate that Bourgeois et al. found was 80% (and bear in mind that Riveros et al. have shown that results postings on websites are generally more complete than journal publications, so we really can’t dismiss website postings as some kind of inferior alternative).

      Chan et al.’s literature search is also no longer up to date. I know of three studies published since their paper that found disclosure rates of around 90%.

      So again, I don’t find the Chan et al. paper convincing evidence for the claim that only half of all trials are published.
