Re: The "ambient scribe" tools listening to and summarising your doctor-patient consultations
Dear Editor
Chris Stokel-Walker’s article on the growing use of AI scribing tools in UK general practice captures the enthusiasm surrounding this emerging technology. But as an academic clinician currently reviewing the regulatory status of AI scribes available to UK healthcare providers, I believe a more critical lens is needed, particularly around the risks faced by non-specialist users and the lack of governance structures to ensure safe adoption.
While these tools promise to ease documentation burden and restore focus to the clinician–patient relationship, not all AI scribes are created equal. Products vary significantly in functionality, regulatory compliance, and cost. Some tools are priced at a premium and backed by NHS pilot schemes and published safety documentation. Others, often marketed directly to individual clinicians, are available at low or no cost but lack transparency around regulatory status or data governance. This price-point disparity introduces risk: clinicians, especially those with limited AI literacy, may opt for cheaper solutions unaware that these tools may not meet UK medical device regulations or NHS safety standards.
Only a minority of digital scribes available for use have verifiable MHRA Class 1 registration listed on the Public Access Registration Database (PARD). Despite NHS England’s April 2025 guidance clarifying that advanced AI scribes may qualify as medical devices, many companies rely on vague compliance language without offering evidence. In the absence of a central registry or regulatory flagging, clinicians are left to assess safety claims themselves, something few are trained or resourced to do.
Primary care also lacks a robust culture of documentation audit, making it harder to detect or respond to subtle errors introduced by generative AI. These may include misinterpretations, hallucinations, or inappropriate clinical inferences that go unnoticed until propagated across electronic systems. This risk could be amplified when AI also enters clinical codes into the record. Automation bias compounds the risk, particularly when clinicians are under time pressure and the AI output appears plausible.
The unregulated nature of this market risks undermining trust in a promising technology. If lower-cost, under-compliant tools gain traction due to ease of access and affordability, the NHS and unwary clinicians could see uneven quality, increased medico-legal risk, and avoidable harm to patients. A national register of AI documentation tools, alongside clearer guidance on consent, error-handling, and auditability, is urgently needed to protect clinicians and patients alike.
Innovation should not be at the expense of accountability. Without regulatory clarity and governance infrastructure, AI scribes may introduce more risk than relief.
Sincerely,
Claire Dady
Advanced Nurse Practitioner & PhD candidate (AI in Clinical Documentation)