
I started my first job as an attorney in the fall of 2007, days after President George W. Bush signed the Food and Drug Administration Amendments Act (FDAAA) into law. As part of my firm’s FDA group, my job was to figure out what our clients needed to do to comply with FDAAA’s requirements for registering and reporting the results of their clinical trials on the government website ClinicalTrials.gov.

The stakes were high enough to justify my hefty hourly rate — the prospect of an initial penalty of up to $10,000 for not complying with the law’s requirements, followed by $10,000 a day if violations were not corrected within 30 days of notification of noncompliance. Institutions and investigators with National Institutes of Health funding also faced suspension or termination of grants and the possibility that their failure to comply with the rules would be considered in future grant applications. As it turned out, however, fear of enforcement was unfounded. A decade after FDAAA went into effect, the government has never levied a single monetary penalty or withheld research funding under the law. And let me assure you, that’s not because of perfect compliance.

As STAT recently reported, trial sponsors had disclosed only 72 percent of required results on ClinicalTrials.gov as of September 2017, and 40 percent of those reports were made after the legal deadline. On the plus side, this reflects a positive trend, compared to 58 percent compliance two years earlier, prompted in large part by “naming and shaming,” as well as some attention from Congress and then-Vice President Joe Biden.


Nevertheless, even the upswing still leaves quite a bit of the glass empty: Results from more than 1 in 4 trials have still not been properly reported. The ethical consequences are substantial, and the government should be using its considerable enforcement authority to put an end to it. But it isn’t.

ClinicalTrials.gov was initially launched in 2000 after Congress passed a law requiring the NIH to create a public registry of clinical trials carried out to test the effectiveness of experimental drugs for patients with serious or life-threatening diseases or conditions. At the time, the idea was to facilitate the process for patients seeking access to trials that might help them. Soon thereafter, ethical arguments supporting greater transparency around clinical trials began to take root in earnest. In 2005, the International Committee of Medical Journal Editors (ICMJE) began requiring, as a condition of publication in its member journals, that investigators register trials in a public database before enrolling the first participant. The editors’ goal was to help mitigate the problems of selective publication when trial outcomes conflict with sponsor or investigator interests, cherry-picking data through unplanned subgroup analyses, or switching endpoints or outcome measures midway through to ensure favorable results.


Although interest in registering trials at ClinicalTrials.gov took off, the ICMJE mandate could do nothing to motivate those for whom publication in a prestigious medical journal is not the coin of the realm. And given medical journals’ interest in publishing results themselves, the ICMJE policy did not require public posting of study results.

In 2007, however, Congress made results reporting a legal requirement, at least for certain clinical trials. FDAAA greatly expanded registration requirements to cover essentially all drug and device trials beyond Phase 1. It also demanded public posting of a summary of trial results typically within one year of a trial’s completion, with certain exceptions providing more time when sponsors are seeking FDA approval of a new product or indication, or when other good cause exists. As noted earlier, the law also came with important new enforcement tools.

This legal development offered essential support for several ethical imperatives. Making study results publicly available helps maximize the value of study participants’ contributions to science, which, in turn, helps justify the risks, burdens, and uncertainties participants are asked to bear. It also helps future researchers design subsequent studies that build on what came before, avoid unnecessary duplication and waste of resources, and take important precautions to protect research participants from risks that became evident in earlier research.

When research is funded with tax dollars, there is further support for making the results easily publicly available so the public knows what it is paying for. Even if trials are published in medical journals, hefty paywalls can block access for patients as well as for many researchers and clinicians abroad. Academic journals are also notoriously unwilling to publish negative study results, leading to a biased record about medical products that may eventually be approved for use in patients.

In short, what FDAAA did was set the groundwork for a promising system of transparency and accountability with critical implications for participant protection, scientific advancement, and evidence-based medicine. But the government has failed to maximize its potential, leaving tens of billions of dollars in penalties uncollected.

In 2014, U.S. Rep. Leonard Lance (R-N.J.) asked the FDA about the record of reporting results on ClinicalTrials.gov, which was then much worse. The FDA explained that it had relied on achieving “voluntary compliance in certain cases where we have identified apparent noncompliance and brought that to the attention of the responsible party. These interactions have increased awareness and resulted in compliance, without the need to assess civil monetary penalties. Significant efforts also have been devoted to providing assistance to stakeholders and to clarify the requirements of the statute, in order to encourage compliance.”

Fines may not be appropriate in all cases, but the reasons for refusing to levy any remain a mystery. NIH has confirmed it has sufficient resources and authority to enforce the law’s requirements, so that can’t be it. STAT investigative reporter Charles Piller hypothesizes that the NIH is concerned about disrupting important research and that the FDA may wish to avoid protracted lawsuits that might arise in the face of large penalties. Perhaps — but those reasons seem particularly weak in the face of such substantial and extended noncompliance.

In my opinion, a more likely explanation is that, at least until recently, the agencies responsible for enforcing FDAAA may have believed that its legal requirements weren’t clear enough to justify penalties for violations; officials at NIH have stated their view that “low compliance [is] due, in part, to the ambiguity of some statutory requirements.” Yet that’s unsatisfying, too, considering that when noncompliant parties have been publicly shamed for failing to report trial results, they’ve found the requirements to be clear enough to allow them to fix the problem.

When challenged on compliance issues, both the NIH and the FDA have said that their focus is on providing the regulated community with adequate information, tools, and assistance to understand and adhere to their reporting obligations. In this vein, the NIH issued new regulations governing ClinicalTrials.gov in 2016, which became effective last year, to clarify and enhance FDAAA’s requirements. The NIH also issued a new policy requiring that every clinical trial it funds must register with ClinicalTrials.gov and post results, even if not otherwise required to do so by FDAAA. As the rule was developed, NIH Director Francis Collins assured Rep. Henry Waxman that “when the regulations are in place, the clinical research community will be equipped to comply with the requirements, and the FDA will be able to enforce them more fully.” Now that the rule is out, the NIH has reiterated this sentiment, emphasizing that “going forward, investigators, sponsors, and the general public will be better able to evaluate what information is required to be submitted and, in general, whether compliance has been achieved.”

With this in mind, there should be no further excuses for failing to use all the tools the NIH and FDA have available to push compliance with clinical trial results reporting requirements to 100 percent. Given that fear of enforcement turned out to be unfounded over the course of a decade, it seems unlikely that the regulated community will respond to the new rules with the same level of concern I witnessed in my legal practice when FDAAA was first passed. Nonetheless, regulators now have an opportunity to reset expectations. The rules are clear, and the evidence demonstrates that the regulated community can comply when it wants to. If what gets measured gets done, what gets penalized gets fixed.

Holly Fernandez Lynch, J.D., is an assistant professor of medical ethics and assistant faculty director of online education in the Department of Medical Ethics and Health Policy at the University of Pennsylvania’s Perelman School of Medicine. She is also a senior fellow at Penn’s Leonard Davis Institute of Health Economics.
