If researchers were better at forecasting the results of clinical trials — and, say, could avoid having to run trials that will inevitably fail — more resources could be devoted to trials that might succeed.
But, it turns out, researchers might not be great at determining the likelihood of a trial’s success.
In unpublished research, McGill bioethicist Jonathan Kimmelman and colleagues asked cancer experts to forecast the probability of more than a dozen clinical trials hitting their primary endpoint. They found that the predictions overall were not very accurate, and, if anything, were too pessimistic.
Kimmelman presented his research last week at Harvard Medical School and spoke to STAT afterward about the importance of forecasting in clinical trials. The conversation has been edited and condensed for clarity.
Why should you think about predicting outcomes in clinical trials? Isn’t that why you run them, to see if something works or not?
It may be the case — and I’m not saying it is — that the outcome of a trial was in fact foreseeable among experts, in which case there was no point in running the trial. We should have just listened to expert advice. Part of what this research is about is seeing the degree to which the trials that we’re doing are operating in that sweet spot of uncertainty. Are there too many trials for which the results were predictable? If the answer is yes, then it means maybe we ought to be shifting our resources to the trials where the results were unpredictable.
And if you are a funder or investigator, you have to make decisions about which trials you fund or put your name on and pursue, and that’s going to depend on your ability to rank trials in terms of their degree of clinical promise.
In one of the trials that you looked at as part of your study, there was a lot of pessimism among experts that this drug would meet its primary endpoint, but it actually was a success. Does that negate the idea that you should be predicting these things or does it mean we should be better about predicting them?
I find it disturbing when the community seems to have already made a decision, and the community of experts is wrong. You like to think we can rely on experts for advice. But we asked people about 15 different trials, and for some of those trials, the results might be completely predictable, while for others, the results are completely unpredictable, and that's why they're running the trial. Maybe that trial was truly an unpredictable one.
Also, when you say there's a 20 percent chance the trial is going to be positive, you're saying it's kind of unlikely to be positive. But 20 percent of your belief is being held in reserve for it being positive. And so it might very well be that with that trial, the experts were right: this trial fell within that 20 percent of belief.
It’s early days in your work, but if researchers are not great at predicting outcomes, and are too pessimistic about the likelihood of success, at least in these trials, what does that mean?
There are a couple different meanings, and it’s early on in this research. One is, when we talk about informed consent with patients, we want to be able to give people accurate information about the prospect of benefit, and so your ability to have a good discussion about risk and benefit is only as good as your ability to accurately perceive risk and benefit. You don’t want to be overly optimistic and you don’t want to be overly pessimistic.
I found it surprising that your cohort of cancer experts was overall pessimistic about the likelihood of these trials hitting their endpoints, especially because there is a sense that there's a lot of hype in the cancer field. Did that surprise you?
It was surprising to me. There’s a perception that the expert community is overly optimistic about the new and flashy treatments coming down the line. But what our data seem to suggest is that people have internalized just how hard it is to beat the standard of care in clinical trials. It may be that when we talk about optimism, we’re fixating on the small group of individuals who are proponents of a particular therapeutic method, but if you embed them in the larger community of experts, the larger community may be very realistic, if not overly pessimistic.
In a study you did that came out in June, you found that researchers were overly optimistic about the reproducibility of preclinical findings. This one found people were pessimistic about clinical trial results. Do those relate to each other? How do you think about that?
The two studies are looking at really different phenomena. In one case you’re asking clinicians how likely it is these emerging treatments are going to surpass treatments with which they have experience. In the other, you’re asking experts who may do experiments in related areas, but not identical experiments, to form a judgment about how much they believe a previous report. I think what the preclinical study was asking was a different question: How much do you trust the prior evidence that came before? Versus the clinical question, which is, how much do you believe this emerging treatment is going to surpass the standard of care? I think that what the preclinical study may be pointing us to is that researchers have a very difficult time interpreting the implications of preclinical reports.
Let’s say you found some people who are really good at forecasting trial outcomes. Then what?
That would be the dream: superforecasters. If you could find properties of individuals, or groups of individuals, who are really, really good at articulating their uncertainty in a manner that's accurate, those are the people you would want to go to for advice. Is this a useful trial to run, or do we already know the answer? You would want to rely more heavily on people who have a clearer picture of future events.
Also, your decision to run a trial isn’t solely based on the probability. It’s also based on the utility — you might say this trial only has a 10 percent chance of being positive, but if it’s a neglected disease and there aren’t a lot of candidates out there and if the risks are low enough, it might still be worth doing the trial.
Have you talked to people in the biopharma industry about this?
I've had very cursory discussions with people in the private sector, but I have not gone deeper into conversations with either the investment community or the biopharmaceutical community about these techniques. I would love the opportunity to work with a company to see how well companies are able to predict outcomes, and to test whether they're more realistic than academics, say. But you'd have to find a company willing to let me ask its employees.