The mental health field is increasingly looking to chatbots to relieve escalating pressure on a limited pool of licensed therapists. But the field is entering uncharted ethical territory as it confronts questions about how closely AI should be involved in such deeply sensitive support.

Researchers and developers are in the very early stages of figuring out how to safely blend artificial intelligence-driven tools like ChatGPT, or even homegrown systems, with the natural empathy offered by humans providing support — especially on peer counseling sites where visitors can ask other internet users for empathetic messages. These studies seek to answer deceptively simple questions about AI’s ability to engender empathy: How do peer counselors feel about getting an assist from AI? How do visitors feel once they find out? And does knowing change how effective the support proves?

They’re also dealing, for the first time, with a thorny set of ethical questions, including how and when to inform users that they’re participating in what’s essentially an experiment to test an AI’s ability to generate responses. Because some of these systems are built to let peers send each other supportive texts using message templates, rather than to provide professional medical care, they may fall into a gray area where the oversight required for clinical trials doesn’t apply.
