
It’s one of the most seductive ideas in medicine: that “real-world evidence,” including data from electronic health record systems and even records of insurance payouts, could replace the far more expensive and time-consuming studies currently considered the gold standard.
The Food and Drug Administration is required, under the 21st Century Cures Act, to explore this idea. And late last month, New York private health care company Aetion published the findings of a study in which real-world evidence was used to try to replicate the results of a specific randomized, controlled clinical trial.
Did it work? It depends on whom you ask.
Aetion’s co-founder, Dr. Sebastian Schneeweiss of Brigham and Women’s Hospital, argued that the study was “quite an achievement.” “It has elevated the conversation from ‘real-world evidence is all bad’ to ‘let’s have a differentiated conversation about this, maybe there is something really good about this,’” he said.
But several other experts who reviewed the data had a different reaction, with two saying no amount of new information would convince them that Aetion’s approach is workable. One of them called the attempt “dangerous.”
The FDA, which funded the study, came down somewhere in the middle. An agency spokeswoman said there is a “stronger scientific justification” for randomized controlled trials, but that “recent efforts to use rigorous design and statistical methods” might lead to a greater chance of obtaining valid results with real-world evidence.
The FDA has contracted with Aetion and the Brigham to try to duplicate the results of 30 completed randomized trials. The agency has also challenged Aetion to duplicate seven randomized trials that are currently underway.
The new data, however, come from a separate effort by Aetion researchers: a pilot attempt to replicate the CAROLINA study, which was being run by Boehringer Ingelheim and Eli Lilly to compare their diabetes drug, Tradjenta, to an older treatment, glimepiride. The pilot was funded by the FDA and the Brigham.
Results of the clinical trial have not been published, but they were presented at the annual scientific meeting of the American Diabetes Association on June 10. The data showed Tradjenta was “non-inferior” to glimepiride on a combined endpoint of heart attacks, strokes, and cardiovascular deaths.
Aetion came to the same conclusion based on its use of real-world data.
But there was also a difference. While both the Aetion study and CAROLINA showed that Tradjenta was non-inferior in terms of episodes of hypoglycemia, or low blood sugar, the reduction in hypoglycemia was bigger in the clinical trial than in Aetion’s prediction.
The FDA spokeswoman said the agency and the researchers “will closely examine the full results of the trial when they are publicly available.”
Dr. Robert Califf, a former FDA commissioner and Duke University professor, said that Aetion “does careful work” and that the paper “looks solid.” He also said, as he has in the past, that he favors an approach to research whereby randomized controlled trials are conducted more cheaply and quickly by building the capacity to conduct them into electronic health records.
But Dr. Steven Nissen, who is the chief academic officer of the Sydell and Arnold Miller Family Heart and Vascular Institute at the Cleveland Clinic, was less complimentary.
“I didn’t know whether to laugh or cry when I read this,” Nissen said. “The fact that they got the right answer doesn’t mean it’s good research. And it’s not a good methodology and it’s not a substitute for careful, thoughtful prospective clinical trials,” he said.
“Is it useless? It’s not completely useless as a hypothesis-generating approach, but it is certainly not something that ought to be used for regulatory decisions,” he added. “And it’s certainly not the type of study that should be used to make clinical decisions. Full stop.”
A top diabetes expert with similar views, Dr. David Nathan, said real-world evidence simply cannot supplant traditional clinical trials. In his view, real-world evidence can only correct for biases that researchers already understand. By randomly assigning patients to one treatment or another, clinical trials rely on chance to cancel out any biases, whether researchers are aware of them or not.
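Nathan’s distinction can be made concrete with a toy simulation (a minimal sketch in Python; the zero drug effect, the “frailty” confounder, and all numbers are invented for illustration and have nothing to do with CAROLINA or Aetion’s methods). In the observational arm, doctors tend to give the new drug to healthier patients, so the drug looks beneficial even though it does nothing; randomization erases that bias without anyone having to measure frailty.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden "frailty": sicker patients do worse no matter which drug they get.
frailty = rng.normal(size=n)

def outcome(treated):
    # The drug's true effect is zero in this toy world.
    return 0.0 * treated - frailty + rng.normal(size=n)

# Observational world: healthier (low-frailty) patients are likelier to get the new drug.
p_treat = 1 / (1 + np.exp(frailty))
obs_treated = rng.random(n) < p_treat
y_obs = outcome(obs_treated)

# Randomized world: a coin flip decides treatment, ignoring frailty entirely.
rct_treated = rng.random(n) < 0.5
y_rct = outcome(rct_treated)

print("observational estimate:", y_obs[obs_treated].mean() - y_obs[~obs_treated].mean())  # far from 0
print("randomized estimate:  ", y_rct[rct_treated].mean() - y_rct[~rct_treated].mean())   # near 0
```

Adjusting the observational estimate for frailty would remove the bias, but only if researchers knew to measure frailty in the first place, which is exactly Nathan’s point.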
“We have to be really careful if you want to supplant what has really generated huge amounts of important data,” said Nathan, director of the Diabetes Center and Clinical Research Center at Massachusetts General Hospital.
How many times would Aetion have to replicate a clinical trial before Nathan believed the results? “Infinite,” he said.
Nissen had a similar answer as to whether he’d ever be OK replacing randomized controlled trials with observational data. “Absolutely not,” he said. “One hundred percent not. It’s dangerous. How often have we been misled by observational research over the years?”
Still, Aetion’s study is particularly interesting because it represents an area where real-world evidence might be useful: testing the safety of diabetes drugs.
After a firestorm of controversy erupted around the diabetes drug Avandia, the FDA mandated that drug makers conduct large clinical trials to determine whether their medicines might cause heart attacks. Some in the industry have griped that these trials are slowing the development of new medicines; the trials have also yielded proof that some new medicines, such as Lilly and Boehringer’s Jardiance, prevent heart attacks.
If it were possible to use data from insurance databases to monitor a diabetes drug’s heart safety instead, it might represent a solution. And Aetion’s study was far faster than the CAROLINA study: it took six weeks to complete, though it required four years of insurance claims data on the two drugs.
CAROLINA took eight years, so even counting the four years needed to accumulate claims data, “real-world evidence” could cut the time to an answer in half, at a considerably lower cost.
Dr. Harlan Krumholz, director of the Center for Outcomes Research and Evaluation at Yale New Haven Hospital, said he worried that using insurance claims, as Aetion does, poses substantial challenges. But he disagreed with the idea that observational data could never fill the role of clinical trials in understanding diabetes drug safety.
He pointed to numerous examples, over the years, where his own observational studies have replicated the results from clinical trials. And he said that using electronic health record systems could make the process easier, and that it’s feasible to get good enough data from observational studies. “Once we have established efficacy, we simply cannot afford to do all these trials for safety,” Krumholz said.
Aetion’s chief executive, Carolyn Magill, is hoping there’s space for real-world evidence. “We’re not advocating for replacing randomization, and that’s really critical,” she said. “It’s just that there are also examples where we believe that data can credibly be used to come to the same result more quickly, and at a lower cost and less disruption to patients.”
Good luck, though, getting doctors to agree where real-world data should be used.
Feels to me that “big data” studies would be particularly prone to confirmation bias, since you need to sanitise EMR/insurance data: it’s all too easy to build your data cleansing to give the results you want or are paid for…
This is an old issue that we have faced for decades. When I was a Director of Pharmacy and P&T Committee Chair, companies would come in and present their studies, and we would point out that that is not how we treat patients in our hospital. Should we see the same clinical outcomes and safety? I would assume so.
Like insurance companies, we could pull medical records and look at a baseline and then at use of the drug, to see if we could duplicate the study results. This was part of the reason we started meeting with a couple of drug companies in 2003-2004 to discuss Risk (now Value) Based Contracts. We wanted them to go at risk: we should see the same clinical outcomes they were presenting, without any further safety issues in our patients. One time the outcomes we measured were actually better than the outcomes the company presented to us, but we had a focus on maximizing patient outcomes.
There is so much conflict of interest in this study. Why is Aetion being paid by the FDA to do an analysis that could be done by non-conflicted academic institutions or by the FDA itself? Aetion, in the end, is building a business to sell its services to pharmaceutical companies, and Schneeweiss, although a leader in the field, is now super conflicted because of his private venture. Similar things can be said about the deeply conflicted Flatiron-Roche-Abernethy connection.
Thank you for this great reporting STATnews!
Luddites rarely move mountains. CAROLINA offers an important insight into how RWD, properly validated, can allow the drug development, regulatory, and patient communities to bring new medicines to market better, faster, and less expensively. Synthetic trials aren’t about abandoning RCTs. They’re about advancing the frontiers of how we can and must continue to combine the advantages of more evidence and better information technology. Randomized controlled trials may be golden, but they are not perfect. If Steve Nissen thinks such advances are “100% wrong,” then he is living in a 20th-century fantasy land. Alas, there is no approved therapy for fear of change. As Deming so famously reminds us, “Change is not required. Survival is not mandatory.”
1. Have you ever entered a code into an EMR? Or submitted an insurance claim? If you had, you’d know how random, incomplete, and tangentially related to the actual patient they are. Some diagnoses exist in the EMR to justify the test the doctor wants to order. Some to increase the complexity of the billing. Some as placeholders to discuss why the patient does NOT have the diagnosis.
2. This approach is an adjunctive one. It cannot form the sole basis of significant medical decisions. At best it could be seen as an unblinded retrospective case study, and that kind of evidence has given rise to expensive and wrong medical care for decades: arthroscopy, stents for asymptomatic CAD, lidocaine after MI to prevent arrhythmias, and early endovascular procedures for stroke. Great case series, dead wrong.
3. As a taxpayer I’m irritated that this approach is mandated and funded. I see more room for error than improvement.
People excluded from RCTs: children, the elderly, pregnant women, lactating women, people with chronic diseases, people with intellectual or physical disabilities, people with mental health diagnoses, and those in organ failure. Trial participants are supposed to be a representative sample of actual patients, but this selection bias produces a really weird demographic that represents a tiny percentage of the people who will actually be prescribed the drugs. There’s a reason antidepressants shown to work in RCTs often don’t translate to real-world success.
We could collect data on 100% of prescriptions and outcomes for a fraction of the cost of clinical trials ($50k per participant). When you have longitudinal time-series data, you can use onset delays and durations of action to infer causality. When you have data on a million participants, many random confounding factors will cancel each other out. It seems absurd to say clinicians would be irresponsible to consider anything other than these unrepresentative 50-subject RCTs.
Insurance claims data and electronic health records are not the same thing. One has a clear financial incentive for including or excluding particular CPT codes and diagnoses. The article jumps between the two categories of data.
There is one big advantage: patient consent is not needed. Using EHR data removes the patient from the process, and the patient never knows.
I believe people also don’t appreciate that two trials, run simultaneously and with the same protocol, can provide pretty different results due solely to natural variability.
Just because the RWE is different doesn’t make it wrong and the RCT right.
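To make the statistician’s point about natural variability concrete, here is a minimal simulation sketch (hypothetical event rates and trial size, not CAROLINA’s): two identically designed trials of the same drug, with the same true effect, can return noticeably different risk ratios by chance alone.

```python
import numpy as np

rng = np.random.default_rng(7)
true_risk_control, true_risk_drug = 0.10, 0.08  # identical in both trials
n_per_arm = 500                                 # a modest-sized trial

def run_trial():
    # Count events in each arm, then compute the risk ratio (drug vs. control).
    events_control = rng.binomial(n_per_arm, true_risk_control)
    events_drug = rng.binomial(n_per_arm, true_risk_drug)
    return (events_drug / n_per_arm) / (events_control / n_per_arm)

# One protocol, one true effect -- two different answers, by chance alone.
print("trial A risk ratio:", round(run_trial(), 2))
print("trial B risk ratio:", round(run_trial(), 2))
```

With 500 patients per arm, the two estimates can easily land on opposite sides of a non-inferiority margin, which is why a discrepancy between a real-world estimate and a single trial does not, by itself, say which one is “right.”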
Also, RCTs can give too rosy a picture, given tight inclusion/exclusion criteria versus the more representative populations reflected in databases.
And I say all this as a statistician who makes his living via clinical trials.
I’m all for accuracy and efficiency; however, I didn’t see anything mentioned about using technology to monitor medication adherence and clinical oversight. Is this being captured anywhere?