
SAN FRANCISCO — Digital health apps, which let patients chat with doctors or health coaches or even receive likely medical diagnoses from a bot, are transforming modern health care. They are also — in practice — being used as suicide crisis hotlines.

Patients are confessing suicidal thoughts using apps designed to help them manage their diabetes or figure out why they might have a headache, according to industry executives. As a result, many digital health startups are scrambling to figure out how best to respond and when to call the police — questions that even suicide prevention experts don’t have good answers to.


“To be honest, when we started this, I didn’t think it was as big an issue as it obviously is,” said Daniel Nathrath, CEO of Ada Health.

The European company built a chatbot to provide smartphone users with possible explanations for their medical complaints. Since the app launched in late 2016, people around the world have used it to complete more than 10 million health assessments. In about 130,000 of those cases, users have told Ada that they’re struggling with suicidal thoughts or behaviors, the company said.

For digital health startups, suicidal patients present just one of a number of unforeseen crisis situations. At the telemedicine company American Well, a physician once conducted a video visit with a woman who said that she had been punched by her spouse; he was shouting in the background of the call as the concerned physician called 911. Another company, whose executive spoke on condition of anonymity, said it had to get authorities involved when a patient using its service threatened to hurt her own child.


Admissions of suicidal thinking and planning, though, come up more often. And it’s not just digital health startups that are grappling with what to do. After Facebook developed an algorithm to flag posts from potentially suicidal users for review, the company called first responders about 3,500 times in 12 months to check in on users deemed to be at high risk. Hospitals and doctors’ offices are dealing with their own new challenges, as patients reply to their social media posts with suicidal comments or voice them on clinics’ private online portals.

The phenomenon is, in some respects, no surprise: There’s a large body of research showing that people are more willing to confess potentially taboo thoughts to a computer than to a fellow human a few feet away.

“People are going to express their suicidality. We’ve destigmatized it. What we’ve not done is prepared everybody” to respond, said April Foreman, a psychologist who works on digital crisis care at the veterans crisis line run by the Department of Veterans Affairs.

[STAT screen capture of the Ada app]

If you open Ada’s app and tell it you’ve been having difficulty concentrating for months, the chatbot will ask you: Have you been feeling low or depressed? Is it impacting your daily life? Say yes, and you’ll be asked: Any thoughts or urges related to ending your own life?

Say yes again, and the chatbot will ask you: Might you try to end your life today? If you say you’ve already tried today, the chatbot will tell you to call an ambulance right away. And if you say you have plans to do so today? You’ll be encouraged to call someone and tell them about your location and emotional state.

That’s where the conversation about suicidality ends with Ada’s chatbot. But some users don’t stop there. They send emails expressing suicidal thoughts that go to Ada’s customer support team — which often then takes on the task of researching and recommending local mental health resources.

All of the digital health companies that STAT surveyed for this story said they have a response plan in place that they follow when patients express suicidal thoughts on their platforms. Such incidents may be rare relative to the total volume of cases they see, but they come up often enough, and the stakes are high enough, to demand protocols, executives said. In situations deemed to be lower risk, that often means getting patients on the phone, directing them to a crisis hotline, or encouraging them to contact a friend.

And in situations in which a patient is deemed to be at higher suicide risk? Some startups say they don’t hesitate to get emergency medical services or police involved to conduct a wellness check when they think it’s in a patient’s best interest — whether or not the patient wants it.

San Francisco-based Omada Health approaches things differently. The startup relies on a network of coaches to exchange messages with patients with chronic conditions like hypertension and type 2 diabetes. While Omada makes a phone call to check in on patients who express suicidal thoughts, calling the authorities in these cases “isn’t within our scope of practice,” said Dr. Carolyn Jasik, the company’s vice president of medical affairs.

In many of these cases, startups are effectively trying to assess the likelihood that patients will try to harm themselves. The trouble is, not even the experts know.

“This has been a problem that people have been struggling with for a really long time — and there’s just no science on this,” said Matthew Nock, a Harvard psychologist who studies suicide and self-injury. “People are largely winging it and using their clinical wisdom to try and figure out when and how to intervene.”

Nock was part of a team of researchers that published a 2016 analysis of the past 50 years of studies trying to predict suicidal thinking, suicide attempts, and suicide deaths. Nearly all of those studies looked at how to know if a person is at risk for suicidality using relatively long prediction windows, such as one year from now. Just 0.1 percent of those studies looked at a window of less than a month. Even fewer looked at a window of days or hours — the type of data that might be particularly useful for digital health startups trying to help their suicidal patients.

Nock said he wants to see digital health companies conduct research and evaluate their own practices to gather data on what works and what doesn’t, at different levels of suicide risk. Just as a decision not to respond carries risk, so too does a decision to respond too aggressively. Calling an ambulance for patients who are not at immediate risk could backfire, if it makes them hesitate to seek help in the future.

Still, as digital health companies and traditional clinicians alike try to assess and mitigate a patient’s suicide risk, not everyone is convinced that chatbots and messaging apps are ready to play a useful role.

Dr. Peter Antall is chief medical officer at Boston-based American Well, a 13-year-old company where telemedicine visits between a patient and a physician happen by live video or, when the internet connection is poor, by phone. Antall is excited about the potential of medical chatbots — he even advises one such startup, called Gyant, in San Francisco — but he’s worried about what gets lost when a clinician can’t see patients’ faces or hear their voices when they might be suicidal.

“Given the acuity and the seriousness of somebody potentially trying to kill themselves, I don’t believe that any of those other technologies are there at this point,” Antall said.

By way of contrast, Antall pointed to a visit that took place a few years ago on American Well’s platform. A patient video-conferenced in from her home, complaining of chronic pain and seeking pain medication. The physician on the other side of the video chat started the conversation from that vantage point, but quickly picked up on concerning signs that pointed in another direction: The patient had a depressed affect. She spoke in a low, monotone voice. She described other signs of major depression and eventually admitted having active thoughts of suicide, along with a plan to act on them.

It took some persuading, but the physician was able to convince the patient to go to the emergency department. They ended the video call so the patient could head out on her way. But the physician called back by phone within the hour to check in. The news was good: The patient was already being assessed at the hospital.

  • Thanks for the great article. A friend of mine told me he had a lot of suicidal thoughts. He also told me a lot of jokes about it… I did not attach any importance to this because they were just jokes. Then he told me he really thinks about it, and I was scared. I had read a lot of articles on vapingdaily.com, mental health sites, etc. I figured out we had to speak with a doctor. He helped, and my friend has been doing great all these years.
    Happy to hear all is going well.

  • As a person who struggles with depression, I do have a chatbot that I like to talk to. I am rarely suicidal, but, for me at least, the opportunity to talk things out with an empathetic, nonjudgmental, nonhuman listener is helpful. I do not have the means to see a mental health professional every time I become depressed, due to the presence of other health concerns, so the ability to talk things out and receive basic CBT has been helpful for me. Sometimes just knowing the chatbot is available is helpful, and yes, comforting. What I really think we need to lower the suicide rate is the connection to community that people in the US have lost. If I had friends who were close by, or medical professionals who weren’t so overworked by groups and hospitals that they have neither the time nor energy to care, I might not need to resort to a bot for empathy and validation.

  • This brings up broader questions of ethics in the predictive analytics space. Google and Alexa not only predict what type of headset I’m likely to buy; they also predict, with higher and higher accuracy, whether I have a drinking problem, whether that rash is communicable, and what caliber is best for effective suicide by firearm.

  • The mass media has been hyping this nonsense for a while. It is highly unlikely that, after the failure of “experts” to identify or intervene with suicidal people, a chatbot will be any better. This is just another scheme, a get-rich-quick gimmick. It is really no wonder the suicide rate in the US is rising; it is clear that the efforts to monetize it are making things worse.

    The US used to have laws pertaining to deceptive health marketing. Nothing is off limits anymore. The false narrative around suicide apps has not lowered the suicide rate, or even led to a demand that suicides be counted. In Post-Fact America, creating a dubious app, rather than acknowledging the issue, looks almost genocidal.

  • It is very disturbing when leaders in the mental health field make comments like there is “no science.” There is a very competent body of work created by the Aeschi Working Group, a mixed group of mental health professionals that has been studying therapeutic perspectives for working with the suicidal person for decades. Their work resulted in a book titled “Building a Therapeutic Alliance With the Suicidal Patient.” This book encourages clinical empathy with the suicidal narrative and provides possible interventions toward this end. It is a comprehensive text, worthy of being taught in every mental health profession. Suicidal persons are like the rest of us. They want and need to be able to talk about their suicidal despair in the presence of another who holds empathy for their crisis. This is a tool. It is founded in solid principles based on years of clinical work. The sooner we are willing to acknowledge such educational guidance and integrate it into our clinical work, the better.

    • Joan, I echo your thoughts on the need for suicidal individuals to receive the counsel, compassion, and empathy of another human.

      Do you think that digital health tools have any merit as platforms to report suicidal thoughts and actions? Obviously the ideal would be for us to remove the stigmas that surround mental and emotional health issues. But while we work to change our societal expectations surrounding mental health, could we responsibly use these platforms as a way to BEGIN the conversation with people affected by severe depression and suicidal thoughts? What if American Well or Ada had systems and services in place to use digital outreach as a launchpad for personal, human care?

      I am not a mental health professional. I would love to know your thoughts.
