The promise and potential pitfalls of AI medical scribes
In the age of electronic health records, providers in patient encounters must do their best to balance entering the information they need on screen with being there for the patient. An attentive bedside manner has become a harder balance to strike.
Enter: Medical scribes.
This job has offered many fast-typing pre-med students a valuable opportunity for low-wage clinical experience—and it’s given providers the opportunity to look away from the EMR screen more, as someone else takes care of the visit notes.
Now, medical AI is looking to supplement—or even replace—the work scribes do in clinical encounters.
About a year ago, Microsoft acquired AI-enabled medical scribe company Nuance Communications for $19 billion. Now, it’s integrating GPT-4 into the platform, which is already used by hospitals around the country to transcribe patient-provider conversations.
This summer, all providers using Nuance’s DAX or Dragon Medical One transcription tools will be eligible to apply as early adopters of DAX Express, the GPT-4-enabled product that bypasses a human reviewer for quality control. Nuance claims DAX Express can return full AI-generated visit notes within minutes.
Let’s dive into the upsides and critiques of this fast-developing application of medical AI.
AI medical scribes can help providers be more present with patients
The potential benefits of using AI transcription for patient encounter notes are a no-brainer: Providers can spend more time connecting with patients when they don't have to be staring at a screen.
In an interview with STAT, a physician assistant and Nuance consultant said: “We have a saying here at Nuance that we want to ‘turn the chair around.’”
For many providers already using the Nuance product, the patient interactions are just the start of how their work lives have improved. AI transcription makes patient visit notes take up less time in a provider’s already packed schedule.
Since starting to use DAX, Hal Baker, chief digital and chief information officer at WellSpan Health, reported that he has “had a couple of office managers say, ‘Dr. Jones ate lunch today. He hasn’t eaten lunch in 14 years.’”
Plus, the software enables providers to rely less on the scribing industry, which is riddled with worries over a lack of regulation, high turnover, and potential exploitation of underpaid workers.
“AI is not going to go to medical school or change jobs or break their ankle skiing and need a leave of absence,” said Tyler Haberle, associate chief health information officer at Intermountain Healthcare, a health system using DAX.
The question of accuracy
However, there are concerns about the rapid implementation of this technology in clinical care: few guardrails exist to ensure accuracy, and there is little independent oversight.
“There are things that still need to be put into place to get us closer to the reality of what maybe the promotional videos infer — that this is AI doing a lot of the work and then a human giving feedback and editing the draft — but we’re not there yet,” said Haberle.
Microsoft and Nuance’s announcement of their accelerated timeline toward entirely AI-transcribed visit notes comes on the heels of less optimistic news about AI medical scribes. A study found that AI-powered medical scribes still miss the “mhms” and other non-lexical conversation sounds when transcribing conversations. These bits of information are key to the accuracy of visit notes. Experts told STAT that, in their experience, Nuance’s technology still sometimes struggles to get it right without human help.
These aren’t inconsequential details. Any mistake in medical documentation can have dangerous consequences for patient care, from mix-ups in medical billing to the prescription of incorrect medications. And providers and hospital systems will be the ones held responsible if things go wrong.
The question, then, is how to regulate the use of this technology when there’s so much at stake in its accuracy. In the case of companies like Nuance, this proves to be a challenge. The developers of these platforms don’t disclose performance metrics or allow outside researchers to access and evaluate the models.
Our perspective: Balancing regulation and meeting an urgent need can be challenging
When it comes to patient safety, a company’s word for the accuracy of its product is sometimes all providers have to go on, especially in medtech.
That tension sits at the heart of the brewing debate about whether medical devices should be subject to placebo trials, to better determine whether new products are not just safe but also accurate and effective. For many medtech innovations, however, and especially medical devices, placebo trials are a prohibitive, if not fully impossible, undertaking.
But what about when it comes to medical AI like AI scribes? The FDA seems to believe this area of medical innovation requires more oversight, given the recent move to increase medical AI regulation.
We welcome a pathway to more effective oversight of these emerging technologies, especially when the accuracy of patient care is at stake. At the same time, we recognize that the technology addresses urgent needs, and it would be a shame to delay meeting them.
As we’ve discussed many times before, healthcare is a trust industry, and this application of medical AI may help increase trust in providers through better connection while decreasing physician burnout.
But we must also do our due diligence, ensuring these tools are trustworthy. We hope that medical AI developers will take the necessary steps to safeguard the accuracy of one of the most important tools in healthcare and continuity of care: patient visit notes.