Why AI remains a top concern for patient safety

Many healthcare leaders say artificial intelligence offers the promise of improving care. Marcus Schabacker, CEO of ECRI, says there need to be more safeguards.

Plenty of healthcare leaders say they are excited about the potential of artificial intelligence in delivering better care, but a leading patient safety group is warning of possible harm to patients.

ECRI, a nonprofit organization, placed AI on its annual list of the top threats facing patients in 2025. AI landed near the top of the list, ranking second, behind only the dismissal of patient and family concerns. AI also ranked among the leading health technology threats to patients a year ago.

Marcus Schabacker, MD, president and CEO of ECRI, has voiced concerns about the use of AI in the healthcare industry. He’s not opposing the development and use of AI by hospitals and clinicians, but he fears there aren’t enough safeguards. He outlined those concerns in an interview with Chief Healthcare Executive®.

“We continue to have concerns because, particularly the larger healthcare providers … they don't have a good governance structure to oversee the utilization of AI,” he says. “And you don't, you know, have to stretch your imagination too far to think that if you do not have good governance in place, these new tools can be abused or used in the wrong context.”

Schabacker says health systems that are going to be using AI in patient care need to have strong governance structures in place.

Before health systems adopt AI tools, Schabacker says that leaders should ask, “How was that AI developed? What is the testing pool? Is there an intrinsic bias in the testing pool?”

He notes that AI models tested primarily on healthy, young white men won’t necessarily perform as well for older Latino men.

ECRI has advocated for more disclosure on AI tools, similar to the nutrition labels found on foods in the supermarket. Schabacker says there needs to be more transparency about the study pool and statistical methods behind AI algorithms, along with information about how the tools were tested.

Many hospitals are focusing on using AI technology to automate some tasks and ease some burdens for workers, such as filing claims or documenting patient visits. Some health systems are using AI tools for clinical uses, including supporting a diagnosis.

But with doctors facing demanding schedules and pressed to see as many patients as possible, Schabacker worries that clinicians could become overly reliant on AI tools in making a diagnosis.

“These tools, which are claimed to be decision-supporting, very quickly become decision-making,” Schabacker says.

If doctors are pressed for time, he says, “You are going to rely on anything which seems reasonable.”

Schabacker says if technologies haven’t been thoroughly tested, doctors could end up using solutions that could provide “a very wrong recommendation.”

“When you tend to rely on those recommendations, they become very quickly decision-making tools, and that's what really concerns us,” he says. “And there's very, very little oversight.”

Schabacker also says he’d like to see more government oversight of AI tools.

“A lot of these AI tools are labeled as decision support, which then requires little or no oversight from the federal government at all,” he says. “And that, to us, is the biggest concern.”

ECRI says AI could lead to medical errors that result in injury or death. The group also says that when patients experience complications, it may be difficult to determine whether AI tools played a role, which could make mistakes harder to identify.

Researchers at the University of Minnesota School of Public Health have looked at the growing use of AI in hospitals, and found some areas of concern.

Two in three American hospitals (65%) are using AI-powered predictive models, according to findings published by Health Affairs in January. Hospitals are using the tools to project the health trajectories of patients and identify patients with higher risks of complications after they leave the hospital.

But not all hospitals reported safeguards on the performance of those models. Researchers found 61% of hospitals were examining their predictive models for accuracy, while 44% were examining those models for bias. Hospitals with better funding were more likely to have developed methods to analyze their models for accuracy and bias.

Researchers have voiced concerns about AI tools reflecting racial bias, which also can reduce their accuracy.

Two out of three doctors (66%) say they’re using AI in some form in their practice, up from 38% in 2023, according to an American Medical Association survey released last month.

Doctors are “increasingly intrigued” by the potential of AI technologies to develop better and more personalized treatments, Jesse M. Ehrenfeld, MD, the former AMA president, said in a statement accompanying the survey.

“But there remain unresolved physician concerns with the design of health AI and the potential of flawed AI-enabled tools to put privacy at risk, integrate poorly with EHR systems, offer incorrect conclusions or recommendations, and introduce new liability concerns,” Ehrenfeld said.
