Talk about AI in medicine often focuses on the most exciting possible innovations: precision diagnostics, clinical prediction systems, and analytics-driven drug discovery. However, with the arrival of large language models like GPT-4, Bard, and LLaMA, there is growing enthusiasm for how AI might reshape the more mundane aspects of clinical practice: clinical documentation and electronic health records. And it’s obvious why. As a patient, I hate the experience of talking to my doctors as they peer at me just over the laptop screen (all the while typing furiously). It really takes the feeling of care out of health care. And, of course, doctors hate EHRs, probably more than patients do. There’s no end to the complaints about increasing documentation demands, poor interface design, and incessant alerts.
It’s no wonder that doctors dream of a hands-free world where a device — like a Dr. Echo — sits in the corner, listening to everything said and then auto-generating the clinical notes, discharge summaries, prior authorization letters, and so on. If large language models can help providers be more present and focused on patient care, then that seems like a clear win. This is exactly what Microsoft, OpenAI, and Epic are hoping for in their new AI EHR collaboration, already underway at Stanford, UW-Madison, and UC San Diego.
Nevertheless, it’s important to take stock of what could be lost in this technological transition. I may be a patient who hates the experience of talking to my doctor through a laptop, but I’m also a researcher who has devoted a large part of his career to understanding what happens when new technologies are added to clinical spaces. From this perspective, I see three major challenges that must be overcome before large language models can truly serve as clinical scribes.