
New large language model helps patients understand their radiology reports

Published 4 days ago · 3 minute read

Imagine getting an MRI of your knee and being told you have “mild intrasubstance degeneration of the posterior horn of the medial meniscus.”

Chances are, most of us who didn’t go to medical school are not going to be able to decipher that jargon as anything meaningful or understand what is actionable from that diagnosis. That’s why Stanford radiologists developed a large language model to help address patients’ medical concerns and questions about X-rays, CTs, MRIs, ultrasounds, PET scans, and angiograms.

Using this model, a patient getting a knee MRI could get a simpler, more useful explanation: The meniscus is a tissue in your knee that serves as a cushion, and, like a pillow, it has gone a little flat but can still function.

This LLM – dubbed “RadGPT” – can extract concepts from a radiologist’s report to then provide an explanation of that concept and suggest possible follow-up questions. The research was published this month in the Journal of the American College of Radiology.

Traditionally, medical expertise is needed to understand the technical reports radiologists write about patient scans, said Curtis Langlotz, Stanford professor of radiology, of medicine, and of biomedical data science, senior fellow at the Stanford Institute for Human-Centered AI (HAI), and senior author of the study. “We hope that our technology won’t just help to explain the results, but will also help to improve the communication between doctor and patient.”

Since 2021, under the 21st Century Cures Act, patients in the United States have had federal protection to get electronic access to their own radiology reports. But tools like RadGPT could get patients more engaged in their care, Langlotz believes, because they can better understand what their test results actually mean.

“Doctors don’t always have the time to go through and explain reports, line by line,” Langlotz said. “I think patients who really do understand what’s in their medical record are going to get better care and will ask better questions.”

To develop RadGPT, the Stanford team took 30 sample radiology reports and extracted five concepts from each. For each of those 150 concepts, they developed an explanation and three question-and-answer pairs that patients might commonly ask. Five radiologists who reviewed these materials determined that the system is unlikely to produce hallucinations or other harmful explanations.
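To make the structure of those materials concrete, here is a minimal sketch of how one concept entry might be organized. All names and the example text are hypothetical illustrations modeled on the article's knee-MRI example; the study's actual prompts, schema, and data are not shown here.

```python
from dataclasses import dataclass, field

@dataclass
class ConceptEntry:
    """One concept extracted from a radiology report, following the
    study's design: a plain-language explanation plus patient Q&A pairs."""
    concept: str
    explanation: str
    qa_pairs: list[tuple[str, str]] = field(default_factory=list)

# Hypothetical entry based on the finding quoted in the article.
entry = ConceptEntry(
    concept="intrasubstance degeneration of the medial meniscus",
    explanation=(
        "The meniscus is a cushion of tissue in your knee. Like a pillow, "
        "it has gone a little flat but can still do its job."
    ),
)
entry.qa_pairs.append(
    ("Is this serious?", "It is a common, often mild finding.")
)

# The team extracted 5 concepts from each of 30 reports:
print(5 * 30)  # 150 concepts in total
```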

AI is still a long way from being able to accurately interpret raw scans. Instead, the current RadGPT model depends on a human radiologist dictating a report; only then does the system extract concepts from what they have written.

“As with any other healthcare technology, safety is absolutely paramount,” said Sanna Herwald, the study’s lead author and a Stanford resident in graduate medical education. “The reason this study is so exciting is because the RadGPT-generated materials were generally deemed safe without further modification. This means that RadGPT is a promising tool that may, after further testing and validation, directly educate patients about their urgent or incidental imaging findings in real time at the patient’s convenience.”

While this LLM still has to be tested in a clinical setting, Langlotz believes the LLMs underpinning this technology will benefit not only patients seeking answers to common medical questions but also radiologists, who could become more productive or take breaks to reduce burnout.

“If you look at self-reports of cognitive load – the amount of work your brain is doing throughout a day – radiology is right at the top of that list.”

This story was originally published by Stanford Institute for Human-Centered AI.

Vignesh Ramachandran

