ECRI placed artificial intelligence high on its ranking of trouble spots. Marcus Schabacker, president and CEO of ECRI, explains his concerns.
Many industry observers say artificial intelligence has the potential to change healthcare dramatically, but some analysts and leaders have stressed the need for more guardrails around AI.
ECRI, an organization focused on patient safety, placed AI among the top 10 health technology hazards to watch in 2024. AI landed fifth on the list of troublesome areas.
Marcus Schabacker, MD, president and CEO of ECRI, tells Chief Healthcare Executive® that he could talk for days when asked about his concerns for AI in healthcare.
“We think there's enormous potential in AI to benefit healthcare, to make it more reliable and more effective, but right now, we don't have the right mechanisms in place to make sure it is safe,” Schabacker says.
AI remains the hot topic at healthcare conventions, and health leaders see enormous potential to improve the diagnosis of patients. However, critics point out that AI isn’t foolproof, and AI-powered solutions can reflect racial bias.
Schabacker outlines a host of concerns about AI and its uses, such as whether algorithms were tested on diverse populations or largely on white men. AI models reflect the quality of the data they are trained on, so they can become biased toward a particular population group, he says.
“Once you have somebody who doesn't fit in that subset, you get a very wrong result,” he says.
Schabacker expresses concern about the lack of Food and Drug Administration regulation of AI tools. He says developers typically describe AI-powered solutions as “decision support” tools, which draw less FDA scrutiny.
That’s worrisome, because more doctors, especially those who are overworked, are going to end up using AI tools to support diagnosis, Schabacker says.
He asks, “Is it really just decision support? Is the physician going to make the final decision?”
“We're very afraid that these decision support tools become actually decision-making tools,” Schabacker says. “And they're certainly not designed or regulated for it.”
Schabacker points out that “we really didn't do well” with another key innovation in healthcare 15 years ago: electronic medical records. Initially designed as a billing solution, electronic health records have become a ubiquitous tool for the healthcare workforce.
“Let's not make the same mistake we did with EMRs and just generally apply it to everything,” Schabacker says.
His message to policymakers: “You’re already behind. Don’t get further behind.”
“Get the right people together to think about what needs to be done to regulate this,” he says. “I'm not saying AI is bad; I think AI can tremendously help. But it's got to be done right. We need to have certain guidelines, design principles, an understanding of what is going into the algorithm. How do we test for it? What's the population and the biases that might be included, and how do we take care of that? And then, what kind of quality assurance do we need on an ongoing basis?”
“The more we can design safety features and principles into it, the less we need to correct it or test for it later,” he says. “So that's the call-out to regulators to be really much, much more involved here.”
Schabacker also offers some words of warning for the healthcare industry.
“Don't let the guys in the garage develop that stuff,” he says. “Have a decent process. Make sure that you have relevant medical expertise as an input, and that it's not just one or two medical advisors. So there's a lot to be done here. But I’m afraid … we’re already behind the eight ball.”