There’s long been talk in medicine about the need to listen more to the patient voice — and now that mantra is being taken literally.
Academics and entrepreneurs are rushing to develop technology to diagnose and predict everything from manic episodes to heart disease to concussions based on an unusual source of data: how you talk.
A growing body of evidence suggests that an array of mental and physical conditions can make you slur your words, elongate sounds, or speak in a more nasal tone. They may even make your voice creak or jitter so briefly that it’s not detectable to the human ear. It’s still not absolutely clear that analyzing speech patterns can generate accurate — or useful — diagnoses. But the race is on to try.
The latest player to enter the arena is Sonde Health, a Boston company launched Tuesday by the venture capital firm PureTech, based on technology licensed from researchers at the Massachusetts Institute of Technology. Sonde wants to develop software for consumers that can screen for depression as well as respiratory and cardiovascular conditions.
“Speaking is something that we do naturally every day,” Sonde COO Jim Harper said.
The company will start by analyzing audio clips of patients reading aloud, but aims to develop technology that can extract vocal features without actually having to record the words. The goal, Harper said, is to “move the monitoring into the background and to collect some of that with devices that people already own.”
Sonde will have plenty of competition: IBM is teaming its Watson supercomputer with academic researchers to try to predict from speech patterns whether patients are likely to develop a psychotic disorder. A Berlin company has worked on diagnosing ADHD with voice recordings. Another Boston company, Cogito, is developing a voice analysis app that is being used by the US Department of Veterans Affairs to monitor the mood of service members; it’s also being tested in patients with bipolar disorder and depression.
Even the Army is interested: Earlier this month, it launched a partnership with MIT researchers at the same lab working on the Sonde technology, with the goal of developing a Food and Drug Administration-approved device to detect brain injury.
The field is so buzzy that some entrepreneurs are rushing right into the consumer market, making bold claims with little clinical evidence. One team raised more than $27,000 on the crowdfunding site Indiegogo on the promise to put out an app, slated to launch this summer, that will analyze “voice patterns to help you achieve optimal health and vitality.” (The crowdfunding campaign also referenced plans to gather data on “frequency biomarkers” related to symptoms of cancer.)
But it won’t be easy to make vocal diagnostics clinically useful, cautioned Christian Poellabauer, a computer scientist at the University of Notre Dame who studies biomarkers for neurological conditions. It can be very difficult to isolate the real cause of changes in speech patterns, he said. Recordings must be of high quality to be useful, and that can be costly. And you need lots of data to ensure that correlations are reliable.
Then there’s the issue of cultural differences: While testing voice analysis to diagnose concussions, for instance, Poellabauer’s team found that many young athletes hesitated or changed their tone when saying the word “hell” — for reasons that may well have had nothing to do with brain injury.
“Speech is a very, very complicated mechanism,” Poellabauer said.
Another crucial question: just how useful the information will be for patients, and whether clinicians will be equipped to help them know what to do with it.
“If you take this app and it says you’re slurring your speech and having a stroke, that could be useful. You go to the hospital immediately. On the other hand, if it says there’s a 38 percent chance you’re going to have a migraine in the next week, I’m not sure that’s so helpful to you. You probably knew that anyway,” said medical ethicist Arthur Caplan of New York University.
Caplan also suggested such technology might be used to predict the likelihood of a patient flying into a rage or losing self-control — and turn those momentary lapses into a pathology. “Where’s the line here between what you want monitored and what you don’t?” he asked.
Critics have also raised privacy concerns, suggesting that voice analysis technology might become so sophisticated that patients could be identified by their cadence and tone, even if their names weren’t attached to the speech sample.
“I don’t think that right now we have the technology to figure out who a person is, just based on their voice alone,” said Cheryl Corcoran, a schizophrenia researcher at Columbia University who has collaborated with IBM Watson. “But that’s technology that may very well exist in the future.”
Meghana Keshavan contributed to this report.