AI Use Case: Messaging Encounter Documentation

Clinical-AI-human use cases remain an area of focus for us. In a past blog post, we shared how Oscar built an AI integration to generate initial drafts of review documentation for electronic lab results, with the goal of simplifying the more tedious aspects of virtual care providers’ work. Last year, we expanded our AI functionality to include a new use case: generating initial drafts for providers to document their secure messaging-based visits with patients.

Oscar members are supported by the 120+ virtual care providers in the Oscar Medical Group (OMG). OMG operates on top of Oscar’s in-house technology stack, including our internally built Electronic Health Record (EHR) system. Across our virtual programs, patients can message their providers to quickly access care. Providers then use our EHR system to place orders and document their encounter with that patient. This documentation is done between visits with other patients and can take up to 20 minutes to complete.

With our expanded functionality, providers leverage AI to generate an initial draft of their documentation, which they then review, amend, or discard. Oscar built in several safeguards to ensure providers properly review and edit the outputs as needed based on their clinical discretion. This allows providers to benefit from AI-driven efficiencies while still prioritizing patient safety.
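
The review-before-filing safeguard described above can be illustrated with a short sketch. The code below is a hypothetical, simplified model — the class and method names are ours, not Oscar’s actual EHR code — showing one way a system can treat AI output strictly as a draft and refuse to file documentation until a provider has explicitly reviewed and accepted it.

```python
# Hypothetical sketch only — names and structure are illustrative, not Oscar's EHR code.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class DraftStatus(Enum):
    PENDING_REVIEW = auto()   # AI draft generated, not yet reviewed by a provider
    ACCEPTED = auto()         # provider reviewed (and possibly edited) the draft
    DISCARDED = auto()        # provider rejected the draft entirely


@dataclass
class EncounterDraft:
    encounter_id: str
    ai_text: str                                   # initial AI-generated draft
    status: DraftStatus = DraftStatus.PENDING_REVIEW
    final_text: Optional[str] = None               # set only after provider review

    def accept(self, provider_text: str) -> None:
        """Provider confirms review; their (possibly edited) text becomes the note."""
        self.final_text = provider_text
        self.status = DraftStatus.ACCEPTED

    def discard(self) -> None:
        """Provider discards the draft and documents the encounter manually."""
        self.status = DraftStatus.DISCARDED

    def file_to_chart(self) -> str:
        """Safeguard: refuse to file anything a provider has not reviewed and accepted."""
        if self.status is not DraftStatus.ACCEPTED or self.final_text is None:
            raise PermissionError("Draft must be reviewed and accepted before filing.")
        return self.final_text


# Example: the draft cannot be filed until the provider accepts it.
draft = EncounterDraft(encounter_id="enc-123", ai_text="Subjective: patient reports ...")
draft.accept(provider_text="Subjective: patient reports ... (provider-edited)")
note = draft.file_to_chart()
```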

Our sample size is small given that the feature only recently launched, but we’ve already seen a significant impact. The time it takes to document secure messaging-based visits has decreased by more than 30% since launch. This time savings is accompanied by strong positive feedback from OMG providers using the feature:

“It compiles a really nicely formatted note and categorizes relevant information into appropriate sections of the SOAP* note, saving me time in reorganizing what I am manually entering.”

“I am beyond impressed with the AI documentation, accuracy, and functionality. It really requires minimal editing and has saved me so much time.”

Our data support this anecdotal feedback regarding the accuracy of the AI-created draft documentation. In a small representative sample, we’ve seen:

  • 35% of the time, providers made very few changes beyond the required edits indicating they had reviewed the initial draft in its entirety

  • 12% of the time, providers made minor changes that did not materially change the content of the AI model’s initial draft, such as adding a greeting or sign-off to patient-facing instructions

  • 47% of the time, providers added data points about a patient that our AI model had no knowledge of, such as specific lab results or their next follow-up appointment

We’ll continue expanding our clinical-AI-human use cases in our virtual care programs, enabling our providers to spend more time caring for patients and less time on tedious administrative tasks.

* Subjective, Objective, Assessment and Plan (SOAP) note
