
A new report examining the first decade of study results reported on ClinicalTrials.gov finds that drug companies and academic research centers have made slow progress in reporting the results of human studies, and that the quality of the data they do report may be an even larger problem.

When the database was expanded through federal law in 2008 to accommodate clinical trial results, an average of two trials per week posted results, even though 79 trials were completed every week. In 2018, that figure rose to 68 trials with results posted out of 135 completed trials, according to the research, published Wednesday in the New England Journal of Medicine.


“It’s a bit disappointing that it’s not higher,” said Georgina Humphreys, clinical data sharing officer for Wellcome Open Research in the U.K., who was not involved in the study. “I thought it might be higher because of the legal requirements that are there.”

Previous reports have also found that adherence has been less than stellar.

A 2015 STAT investigation found, for example, that most research institutions, including Stanford University and Memorial Sloan Kettering Cancer Center, were routinely failing to report clinical trial results as mandated by the Food and Drug Administration Amendments Act (FDAAA), which helped establish ClinicalTrials.gov. The law says that trial sponsors that are hoping to have an FDA-approved drug or device on the market and are based in the U.S. (or have a trial site in the U.S.) have up to a year after a trial’s completion to post results on the website, though there are some exceptions allowed.


The STAT investigation found that results from academic institutions arrived later than the one-year limit, if they did at all, 90% of the time, compared with 74% for industry.

Another analysis earlier this year found a similar trend — fewer than a third of studies that were supposed to be reporting their results were actually doing so. And it’s not just a problem in the U.S. — research institutions and universities in the U.K. are also not reporting trial results regularly.

The NEJM paper goes beyond these failings and digs deeper into how consistent institutions are with the data they upload.

“Many of the initial submissions have nonsensical elements to them,” said Dr. Deborah Zarin, director of the Program for the Advancement of the Clinical Trials Enterprise at Harvard Medical School and lead author of the new report.

Only after a successful review by staff at the National Library of Medicine — which helps maintain ClinicalTrials.gov along with the National Institutes of Health — do data get posted to the website. In the report, researchers examined some of the reasons why submissions were rejected and found that scientists were often inconsistent about what they said they measured and the units they used when describing the data. Others didn’t include clear time frames for when data were measured, or included multiple time points without proper explanations. Since 2017, for instance, those conducting trials have been required to report mortality from any cause, but of the 160 trials that fell under this rule, only 47 included a table showing these statistics.

“We talk about these very poor-quality records being a disturbing reflection of a clinical research enterprise that has trouble producing results from clinical trials on human beings, when the whole point of doing the trial is to produce those results,” said Zarin, who is also a former director of ClinicalTrials.gov. At the same time, she said, “It’s a churn in the system. It just uses up staff time at the NLM as well as presumably [sponsors’] own time.”

Consistent with previous findings, however, industry fared better during the reviews than academic centers. At first review, 31% of industry trials and 17% of academic trials were in good enough shape to be posted to the site. Even among sponsors that register 20 or more trials per year, first-review pass rates ranged from 16% to 77% for industry trials and from 5% to 44% for academic trials.

“Academia doesn’t have a lot of resources to dedicate to this,” said Jennifer Miller, a bioethicist at Yale University School of Medicine, who wasn’t involved in the study. Miller, who founded “Good Pharma Scorecard,” an index that ranks new drugs and pharmaceutical companies on a range of criteria, including transparency, added that companies often have entire divisions at their disposal, sometimes with software to automate reporting.

But the lack of quality data is undermining efforts at transparency, according to Zarin. The study found, for instance, that ClinicalTrials.gov is often the only place that any results from trials are shared: of a sample of 380 trials, 58% hadn’t published their results in a journal by the end of a year’s follow-up. And there is reason to believe that sponsors may not report the same data on the database as they do in publications. Among the 47 trials that reported patient deaths, for instance, the authors counted 995 deaths; when the same trials were later published in journals, they reported 964.

“People say that publishing in an open-access journal in itself is sufficient,” Humphreys said. “But there’s an argument to be made for publishing on registries because there are inconsistencies between what’s reported in publications and what’s visible on trial registries.”

Data posted on ClinicalTrials.gov may provide a more complete picture of what a trial measured than a journal article typically allows. And having all the data available could also benefit future trials run by researchers not directly involved in the original work.

“We’re starting to see researchers use data from other trials to then use fewer participants in their own trials,” as a cost- and time-saving measure, Miller said. Projects like ePlacebo and the Datasphere project have pooled placebo or even intervention-arm data from other trials for researchers to use in new trials, she said.

Outside the research enterprise, patients, their families, and providers also rely on the information on the website, not only to learn which trials are underway but also to check information on the effectiveness or downsides of a given therapy.

“We owe it to patients participating in research to make their data publicly available,” Miller said. “Most of them want this.”

The FDA, NIH, and other regulatory agencies do have the power to enforce reporting. The FDA, for instance, can collect a more than $11,500 fine for every day that a clinical trial sponsor fails to adhere to regulations outlined in the FDAAA. The NIH can withhold funding for academic sponsors who receive grant money but fail to report results. But Zarin and Miller said they are unaware of any such actions ever having been taken.

Other strategies that could — and have — worked: naming, and shaming, sponsors who routinely fail to report results. A year after STAT’s 2015 investigation, for instance, the NIH found an overall improvement in trial reporting. To that end, starting in January, ClinicalTrials.gov will show the NLM’s review findings. Researchers will still have to correct any errors and resubmit data, but the public will see poor-quality submissions, according to Zarin.

On the flip side, Zarin and Miller also advocate for what they call “naming and faming” those who do well in their reporting. “It’s really important to give them credit as well because [attention from others] will likely be more common than any formal enforcement action,” Zarin said.

Miller said that “faming” sponsors is a constructive way to inspire change, especially since progress will happen slowly. “It takes a long time to change an entire sector of clinical research,” Miller said. It took a long time to require clinical trials to register and another 10 years from then to require posting results on ClinicalTrials.gov. “The fact that things are getting better is great, but things change slowly in our country,” she said.

Correction: An earlier version of this story misstated the fine amount that the FDA can collect from clinical trial sponsors.