Doctors across the U.S. have begun doing what once seemed unthinkable in a litigious health care environment: recording their medical conversations with patients and encouraging them to review the audio at home.
The rationale for the practice is as simple as the smartphone technology that enables it: having a recording improves patients’ understanding and recall of their doctor visits and helps them adhere to treatment regimens.
Now the increasing power of artificial intelligence is promising to bring this technical capability to a new level — potentially offering big rewards, and risks, for patients and caregivers.
Tech giants such as Google, IBM, and Amazon, among others, have developed speech recognition and machine learning technologies that allow doctors to automatically transcribe audio recordings, making it easier to upload them into electronic health records and mine them for insights about specific diseases and the most effective communication practices.
But that capacity, which could improve care and revolutionize burdensome record-keeping practices, also raises thorny questions about who owns the data, how it's used, and whether the underlying information could be deployed in ways that doctors and patients haven't anticipated. For example, if a third-party vendor of the technology gains access to vast stores of patient data, could it use that data to target advertising, or allow the information to be shared, intentionally or unintentionally, with other organizations pursuing their own agendas?
“We really need policies and regulations to be clear on this,” said Paul Barr, a researcher and professor at the Dartmouth Institute for Health Policy and Clinical Practice. “One person or a small research group in a single institution can’t think through all the possibilities and pitfalls. We need to convene a broader group of stakeholders from all walks of life.”
Barr is leading a project at Dartmouth to create an artificial intelligence-enabled system that allows for the routine audio recording of conversations between clinicians and patients. The project, known as ORALS (Open Recording Automated Logging System), is designed to use natural language processing to automatically tag the elements of a conversation deemed most valuable for patients. Patients can review the transcribed text of the visit and tap sections marked as “diagnosis” or “medication protocols” to review important details.
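The article doesn't describe how ORALS performs its tagging, but a minimal sketch of the general idea is rule-based labeling: scan transcript segments for cue phrases and mark those that look like a diagnosis or a medication instruction. The tag names and cue lists below are illustrative assumptions, not Dartmouth's actual implementation.

```python
# Hypothetical cue-phrase tagger, illustrating the general technique of
# labeling transcript segments (not Dartmouth's actual ORALS system).

CUES = {
    "diagnosis": ["diagnosed with", "you have", "the test shows"],
    "medication protocols": ["take", "dose", "twice a day", "prescription"],
}

def tag_segments(segments):
    """Return (segment, [tags]) pairs based on simple cue-phrase matching."""
    tagged = []
    for seg in segments:
        text = seg.lower()
        tags = [label for label, cues in CUES.items()
                if any(cue in text for cue in cues)]
        tagged.append((seg, tags))
    return tagged

transcript = [
    "Based on the biopsy, you have stage one melanoma.",
    "Take the antibiotic twice a day with food.",
    "Let's schedule a follow-up in six weeks.",
]
for seg, tags in tag_segments(transcript):
    print(tags, seg)
```

Production systems would rely on trained language models rather than hand-written cue lists, but the output is the same shape: labeled spans a patient can tap to jump to the relevant part of the visit.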
Dartmouth plans to begin testing the system on patients this summer. Earlier this week, Barr co-authored an editorial in the BMJ on the technology's potential to improve care and reduce the paperwork burdens that contribute to caregiver burnout.
“When talk can be transcribed into an accurate digital record that in turn could be automatically coded, the potential of abandoning the keyboard could bring back some sanity to a clinical process that has become weighed down by data entry,” the editorial stated.
The power of a simple recording
It is unclear how many U.S. health care providers are recording clinical conversations on a regular basis. But the practice is growing in concert with demand from patients who are more frequently asking doctors to record conversations with their smartphones.
Some institutions engaged in the practice are keeping it purposely low-tech and away from artificial intelligence applications — at least for now.
In Galveston, the University of Texas Medical Branch donates digital recorders to patients to allow them to tape their visits and share them with family members and other caregivers. Meredith Masel, director of the Oliver Center for Patient Safety and Quality at the university, said the approach gives patients the benefit of the recording without creating confusion over the ownership and control of the information.
“It’s not university property once it’s given to the patient,” Masel said, adding that her role is purely to help patients recall their conversations and improve care, not to review the data for research purposes. “It started as a very simple partnership here with our oncologists and a box of audio recorders.”
The university started recording patient conversations in 2009. The service was seen as particularly beneficial for cancer patients, because of the stress and complexity of their medical conversations.
Since then, Masel said, the practice has spread to family doctors, geriatricians, neurologists, and other physicians; the university has also created a code in the electronic health record to note when patients are recording their conversations. Masel did not have a tally of the number of recordings, but said she gets about 10 notifications every two weeks of patients joining the program.
“We have a large and complicated medical system, and walking away with a recording can empower someone to feel more in control of their own care,” Masel said. “When you have to follow up and make the decisions, and the buck stops with you, this information is extremely valuable.”
Enter Google and Amazon
Audio recording of patients has been a topic of research since the 1970s, with studies showing that the practice improves patient comprehension and recall, and results in better treatment adherence. Still, recording is not widespread, in part due to the difficulty of sifting through lengthy clinical conversations.
The advent of new artificial intelligence products promises to make recordings easier to use. Last November, researchers from Google wrote that the company could use its automatic speech recognition technology to automatically transcribe medical conversations with relative accuracy. Using different speech recognition models, the researchers said they achieved a word error rate between 18 and 20 percent, which is considered respectable given the complexity of the information. They reported that both models performed well in comprehending “important medical utterances.”
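Word error rate, the metric the Google researchers cite, is a standard measure in speech recognition: the word-level edit distance (substitutions, insertions, and deletions) between the machine transcript and a human reference, divided by the number of words in the reference. A minimal implementation:

```python
# Word error rate (WER): edit distance over words between a reference
# transcript and an ASR hypothesis, normalized by reference length.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] = edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("take two tablets daily", "take tablets twice daily"))  # 0.5
```

An 18 to 20 percent WER means roughly one word in five is wrong, which is why the researchers emphasize that the models still captured the “important medical utterances.”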
The company said it would work with researchers at Stanford University to further develop the technology and help reduce the amount of time clinicians must spend typing information into medical records.
Amazon also sells an automatic speech recognition technology that can be customized for a variety of uses, and the company’s Echo products could potentially be used to install ambient listening capabilities in health care settings.
At Dartmouth, the ORALS system allows patients to record conversations on their smartphones, organize and review the transcript, and manage access for family members and other caregivers through the use of a secure server. The researchers are also seeking to add hyperlinks to the text to allow patients to learn more on specific topics from trusted sources.
“It allows them to build their own personal resource based on the recordings,” Barr said. “This can be one of the most intimidating things in the world. Folks need help finding credible information.”
While the technology offers clear benefits for doctors and patients, its development is outpacing policymakers’ efforts to address risks related to cybersecurity and patient privacy. Those risks are magnified by recordings in multiple ways:
First, patients could be carrying on their smartphones highly sensitive data that is only as secure as their ability to keep track of their phone. Because of that vulnerability, Dartmouth recommends that the information be stored on a secure central server where access can be controlled by the patient, not the health care provider.
Second, the compiling of this information by third-party vendors could create the opportunity for abuses, if the vendor or a client sold or shared the information for advertising, research, or other commercial purposes.
In the U.K., a National Health Service trust was found last year to have illegally shared 1.6 million patient records with DeepMind, Google’s artificial intelligence company. The risks are further underscored by the recent scandal involving the use of Facebook data by Cambridge Analytica.
“If you get enough greedy people in the mix who don’t care about people, you’re always going to get something a little messed up,” said Dr. James Ryan, a family medicine physician in Michigan who created his own system for audio recording patient conversations. He added, however, that this tension has always existed in health care and should not stand in the way of progress.
“The same [conflicts] are present in biochemical knowledge. It becomes a way to generate profit,” he said. “But biochemistry has also given us drugs that have saved thousands and thousands of lives a year.”
In their editorial this week, Barr and his colleagues argued for swift action to clarify the ground rules.
“It is time to develop policies on how to collect, store, manage, and share these resources and to maximize the value of the data to patients and to many other stakeholders,” they wrote. “Commercial entities, acting alone, are not the best custodians of data that have the potential for such enormous social good.”