AI-enabled ambient listening tools are rapidly gaining traction in healthcare settings around the world. These solutions can significantly reduce provider documentation time, capture more complete information from patient encounters, and free up clinicians to spend more meaningful time face-to-face with patients.

However, like any emerging technology, these tools introduce risk. Healthcare organizations should carefully evaluate these tools and maintain strong oversight to ensure accuracy, privacy, and compliance.

Below are key takeaways Dean Dorton’s Healthcare Internal Audit team has identified when reviewing these tools and analyzing encounters generated with AI transcription.

Be Proactive – Provide Tools Before Providers Find Their Own

Physicians can be eager adopters of tools that enhance productivity and care quality. If your organization isn’t actively identifying and vetting ambient listening technologies, providers may seek out unsanctioned tools on their own.

This can lead to serious concerns about patient privacy and data security, especially when tools fall outside your electronic health record environment.

Review Retention Policies and Storage Practices

Understand what form patient interaction recordings take and how long they are stored.

Short retention windows or audio-only storage formats may limit your organization’s ability to validate whether AI-generated documentation accurately reflects the encounter.

Monitor Templates Used in Documentation

Ambient listening tools often generate documentation using predefined templates that may include elements such as a physical exam.

In some cases, the physical exam field may auto-populate with standard “healthy” terminology, since normal exam findings are rarely spoken aloud during the visit. Ensure providers carefully review and update template fields so records accurately reflect the patient’s true condition.

Scrutinize Medication and Prescription Details

Medication names are a frequent challenge for AI transcription tools. Slight recognition errors, such as confusing sound-alike drug names, can lead to documentation or prescribing inaccuracies.

Provider diligence is essential to confirm that the medication names, dosages, and instructions are correct before finalizing the record.

Require Hard Stops for Reviewing Orders

Depending on training data, AI tools may incorrectly substitute diagnostic test names when generating suggested orders.

Implement a mandatory review step (a hard stop) that requires providers to verify all AI-generated suggested orders before approval, preventing incorrect or inappropriate orders from being placed.

Educate Providers on Common Pitfalls

Beyond the specific concerns outlined above, providers should be on the lookout for other situations that can cause the tool to produce inaccurate medical record information, including:

  • When a family member or companion joins the conversation and contributes information
  • When irrelevant or off-topic dialogue occurs during the visit
  • When recording continues after the encounter ends, potentially capturing details about another patient and misplacing them in the record