Researchers at the University of Wisconsin-Madison have spent years making sure that their meditation app, called the Healthy Minds Program, passes clinical muster and delivers positive outcomes. Designing studies to test the app’s efficacy led Simon Goldberg, an assistant professor at UW, to confront the mountain of thousands of studies of different mobile mental health tools, including apps, text-message based support, and other interventions.

Researchers had taken the time to synthesize some of the studies, but it was hard, even for someone steeped in the science like Goldberg, to draw definitive conclusions about what works and what doesn’t. So Goldberg teamed up with a few other researchers and took a step back to see if they could put order to the work collected in these meta-analyses — a kind of deep meditation on the existing research inspired by UW’s meditation app.


The meta-review, published on Tuesday in PLOS Digital Health, examined 14 meta-analyses that focused specifically on randomized controlled trials of mental health interventions, including treatments for depression, anxiety, and smoking cessation. In total, the review included 145 trials that enrolled nearly 50,000 patients. The review found universal shortcomings in study design, leading the researchers to write that they “failed to find convincing evidence in support of any mobile phone-based intervention on any outcome.”

It’s a provocative claim that hints at the work ahead for an industry garnering billions of dollars in investment to develop products that can help people more easily manage health conditions from their phones. Researchers and entrepreneurs are hoping to collect enough evidence to prove to health care policymakers and the public that their interventions work.

But it’s also a sign of how nascent the industry still is, and how even scientists and companies committed to rigorous evaluation are still sorting out what a good trial of an app looks like. Goldberg told STAT that though the evidence isn’t strong yet, better studies will surely emerge for some of the more promising interventions.


“I would bet the farm that if you wait five years and people keep running these trials, there will be convincing evidence,” he said.

To analyze the pool of studies, the researchers identified 34 different combinations of study criteria: the population targeted, the type of intervention, the control, and the outcome sought. The researchers gathered the effect sizes for these studies and then graded the evidence based on a number of factors, such as the statistical significance of the results and the consistency of findings across studies.

That none of the interventions managed to show “convincing evidence” reflects that they weren’t able to satisfy a high standard, including an absence of publication bias, or the tendency to publish only favorable results, which was rarely assessed.

Eight of the interventions, however, were found to hit a slightly lower bar of having “highly suggestive” evidence — though that came with the caveat that the effect of the interventions and the strength of the evidence both “tended to diminish as comparison conditions became more rigorous,” the authors wrote.

None of those eight used control treatments that were designed to be therapeutic. Only one type of treatment, text message support for smoking cessation, was compared to an active control, meaning the comparison group received something to occupy their time and attention.

“For me, this suggests that mobile phone-based interventions might not be uniquely effective, but still are effective relative to nothing or non-therapeutic interventions,” said Goldberg. “Given the scalability of these interventions, that’s still good news.”

Rajani Sadasivam, a professor at the University of Massachusetts Chan Medical School who develops text-based services to help people quit smoking, agreed that researchers need to consider cost and reach. For example, a face-to-face, 45-minute counseling session to help someone quit smoking would almost certainly work better than a text-based intervention, but the text-based intervention will be easier for far more people to access.

Lisa Marsch, the director of the Dartmouth Center for Technology and Behavioral Health, told STAT that the new meta-review highlights the limitations of the existing literature, including that researchers often don’t dig into variables that can impact outcomes and rarely report adverse effects.

She said one of the downsides is that meta-analyses tend to group together interventions that may be quite different from each other, like a mobile app that delivers a “potent therapeutic approach” like cognitive behavioral therapy and another that provides inspirational messages or tips. Each might have different levels of clinical impact.

“Lumping them together into categories such as smartphone apps or text messaging apps loses sight of this,” she said. “This is quite distinct from something like a medication which… is more invariable across trials.” Another limitation of the study, as noted by Goldberg and colleagues, is that the paper leaves out evidence that hadn’t been examined in past meta-analyses.

Despite the somewhat dismal conclusions about convincing evidence and active controls, Goldberg sees the study’s findings as a positive sign of where the research is headed.

“Given how recent apps are in human history, there’s a ton of research on them, and there’s evidence that they’re yielding benefits,” he said. “To me, it’s super encouraging.”
