
© Zeal News Africa

Doctors Think AI Shows Promise, but Worry It Will Mislead Patients

Published 18 hours ago · 5 minute read


For years, healthcare workers have had to deal with “Doctor Google”—when patients turn to Google for medical advice instead of a professional. It’s a practice that organizations including Brown University Health, Orlando Health and the Northeast Georgia Physicians Group have advised against, citing both the years of education and experience physicians have and the tendency for people to gravitate toward worst-case scenarios.

Today, Doctor Google has a new rival: ChatGPT.

That’s according to a new report from academic publishing company Elsevier, which also makes AI tools for doctors, such as research assistant Scopus AI and chemical discovery tool Reaxys. For the report, which will be released today, the company surveyed 2,206 doctors and nurses from 109 countries this past spring. This included 268 clinicians in North America, 1,170 in the Asia Pacific, 439 in Europe, 164 in Latin America and 147 in the Middle East and Africa. Eighteen declined to disclose their location.

These clinicians were asked about the role they thought AI plays in healthcare today and its potential implications for the future. The company emailed a link to the survey to healthcare workers who had recently published books or journal articles, served on certain third-party panels or were otherwise known to Elsevier. The authors acknowledged that because it’s not a randomized sample, these results are not necessarily generalizable.

One of the biggest concerns the survey respondents had was regarding patient use of ChatGPT and similar tools. They reported that patients often arrive with preconceived—and sometimes wrong—ideas about their health issues because of what these tools provided. One major issue? These models are frequently wrong. For example, OpenAI’s o3 and o4-mini models hallucinate (that is, fabricate answers to questions) around 30% to 50% of the time, per the company’s own recent tests.

This creates an additional burden for healthcare workers, who are often already overwhelmed by their work and the number of patients they see. In North America, 34% of clinicians who reported being time-strapped noted that patients have numerous questions. Globally, this number was about 22%.

Even more concerning, Jan Herzhoff, president of Elsevier’s global healthcare businesses and a sponsor of the study, told Forbes, is that patients may decide to skip the hospital altogether and rely solely on ChatGPT and other websites for advice. He said that over 50% of U.S.-based clinicians predict that most patients will self-diagnose rather than see a professional within the next three years, though it’s not clear how often patients are skipping their doctor in favor of AI right now.

Though healthcare workers may have concerns about their patients’ use of AI, more are finding themselves using such tools. In the past year, the percentage of doctors and nurses who have used AI in a clinical setting jumped from 26% to 48%, the survey found. The survey respondents are also optimistic about AI’s ability to streamline their workflows, though at the same time few say that their own institutions were using AI to effectively solve current problems.

A majority of clinicians surveyed predicted that AI will save them time, provide faster and more accurate diagnoses, and improve patient outcomes within the next three years. Many startups are developing such tools, including K Health and Innovaccer, which have raised $384 million and $675 million, respectively, in total venture funding to date. According to PwC’s Strategy& team, the AI healthcare market is expected to reach $868 billion by 2030.

“As an organization, we see AI as a tool to augment the capabilities of the clinician,” Herzhoff said. Not one that replaces them.

Herzhoff himself is optimistic about AI in healthcare, especially for administrative tasks. Right now, the survey finds, doctors and nurses are using AI tools to identify potentially harmful drug interactions before writing a new prescription or to write letters to patients. Of clinicians who’ve used AI, 50% use generalist tools like ChatGPT or Gemini frequently or always; only 22% use specialist tools with the same frequency.

One reason for this might be how healthcare systems are deploying AI tools. Only about a third of clinicians said that their workplace provides sufficient access to digital tools and AI training. Only 29% said that their institutions provide effective AI governance.

As more healthcare-focused AI tools are developed, ensuring that these new technologies are trained on high-quality, peer-reviewed material is another priority for those surveyed, with 70% of doctors and 60% of nurses saying this is vital.

Herzhoff noted that while AI tools may help clinicians save time in the future, the effort they need to put in now to learn AI, particularly how to write detailed and useful prompts, presents a barrier. Already, 47% of clinicians report that being tired has impacted their ability to treat patients.

“Paradoxically, they don’t have the time, and they don’t find the time to use these tools,” Herzhoff said.

Although doctors and nurses look forward to using AI to speed up their day-to-day work, they are much more skeptical about using it to make decisions about patients. Only 16% said they were using AI in this way, while 37% expressed an unwillingness to use AI to make clinical decisions.

“AI should provide the information I need to make good decisions,” one doctor who responded to the survey said. “I don’t believe I should abrogate responsibility for clinical assessment to AI—I need to keep authority over the final outcomes.”

Origin: Forbes