BETHESDA, Md. — Could an official database of experiments on lab rats help medical science overcome some of its most pressing problems?

Some researchers are taking an interest in the idea, which is being suggested by Food and Drug Administration Commissioner Robert Califf as a counterpart to a database that already gathers information on clinical trials. But the concept would face some tough questions if government officials were serious about pursuing it — especially about whether early, exploratory research should really get that kind of scrutiny.

Speaking at an event here Thursday about the challenges of reproducing major research results, Califf floated the idea of “something like a ClinicalTrials.gov for preclinical work.”

The goal, he said, would be to gather data from the preclinical research that involves either lab animals or cells growing in lab dishes.

“One of the problems that we have at the FDA in this regard is that trade secrets at the FDA are protected by the law. And I’ve always felt that one of the most important things that could be done, if it could be, would be to free up all the data from the drug development process that is never seen by anyone,” he said at the event hosted by the National Library of Medicine.

“Because there is no compelling reason to publish them. If they’re successful, they become trade secrets. And if they’re unsuccessful, they get dropped and no one cares if they’re published,” he continued. “If you think about reproducibility and the fabric of science, in some ways, if it were possible, it would really be good to have something like a ClinicalTrials.gov for preclinical work.”

That website collects data on the human studies of new treatments that research universities, hospitals, and drug companies are required by law to report to the federal government. The government is allowed to fine researchers who fail to comply, though a recent STAT investigation found that many scientists who did not report clinical trial results faced no financial penalties from the federal government.

There is no similar database for the preclinical research using lab animals or living cells that lays the groundwork for the later trials involving humans. But such a repository could be useful, Califf suggested, to help solve two interconnected problems plaguing biomedical research: lack of transparency and an inability to reproduce blockbuster research results.

Much of the initial research used to justify the development of new treatments cannot be replicated later on, casting doubt on its validity. There is also a widespread reluctance to publish research that shows negative or negligible results, despite their scientific value, because grant money and professional prestige are predicated on publishing research that shows significant positive findings.

It’s a real problem: One former pharmaceutical industry researcher said in 2012 that his team tried to recreate 53 “landmark” preclinical cancer studies that were published in major research journals and came from well-respected institutions.

They were able to replicate the results in only six of them.

Califf’s concept of a preclinical version of ClinicalTrials.gov would help shed light on that earlier phase of research, which is often used to justify multimillion-dollar investments in human research.

“Knowing what the dead ends are would make things more efficient for the system as a whole,” said Stuart Buck, who works on research integrity at the Laura and John Arnold Foundation. Researchers would know what experiments had already been done, how they were conducted, and what they had found. They could then avoid needless work when an earlier experiment proved fruitless, build on previous research that succeeded, or try to verify earlier studies themselves.

The clinical-trial database provides some sense of the success rate for human research, Buck said, and therefore a picture of what’s working and why. If it shows that 20,000 trials have been started and only 4,000 were published on time, then the broader research community and the public know that 16,000 weren’t.

“The problem with other areas of research is I don’t even have a way of making that statement … I just don’t have any way of seeing what the denominator is, but that itself is the problem,” Buck said. “That just to me indicates we need better rules about disclosure and at least registering what experiments are going on.”

It wouldn’t necessarily solve the problem by itself, though. Elizabeth Iorns, founder and CEO of the Science Exchange — a company that aims to encourage research cooperation among scientists — said that while Califf’s idea was “interesting,” the bigger problem is that there is very little funding available for the replicative research that is needed.

“There’s no funding for it. Nobody wants to do it. No one publishes it,” she told STAT at the National Library of Medicine event. Her firm has undertaken such an effort for prostate cancer and other areas.

Iorns also raised an objection that Califf’s suggestion could face if the FDA or NIH tried to put it into practice: Preclinical research is often exploratory, without a clear objective, as opposed to clinical trials that have firmer parameters and metrics for success.

“Exploratory research is exploratory. You shouldn’t preregister it. You shouldn’t have a statistical analysis planned,” she said. “That would be crazy. It would stifle so much innovation and serendipity that just happens.”

Califf seemed aware of that, saying he had floated the idea when he was working at Duke University.

“People’s hair caught on fire and they ran out of the room at the very idea,” he said.

Califf didn’t suggest that he had a detailed plan for implementing the idea. But implementation would be just one of the many issues the concept would face — along with whether Congress is open to the idea and, if not, what tools the FDA or NIH have to encourage such disclosure.

Still, Califf plainly felt that disclosing that information in a systematic way could help science solve the reproducibility problem.

“It would seem like it would be important to know all the studies that failed, as well as the one that succeeded,” he said, “if you really wanted to understand the reproducibility.”