Erik Swanson of Kaufman Hall talks about how health systems will expand their use of AI, and where they will move a bit more slowly.
Many hospitals are showing improved financial performance after a rough couple of years, and they’re now looking to take a more strategic approach, experts say.
Analysts from Vizient and Kaufman Hall looked at trends for health systems in 2025. In a report released last week, they projected some of the new strategies health systems will pursue in the coming year.
Erik Swanson, senior vice president of data and analytics at Kaufman Hall, talked with Chief Healthcare Executive® about some key trends to watch in the coming year. Perhaps not surprisingly, Swanson said health systems will continue to look for new ways to incorporate artificial intelligence to improve their operations.
“Through our survey of a number of our members and clients that we work with, AI machine learning is becoming a prominent portion of strategy that they're taking, particularly in the advancement of improving operational outcomes and clinical outcomes,” he said.
Swanson discusses some of the ways health systems are going to be using and expanding AI, and where they are likely to be more cautious. He also offers some insight for health systems on proper governance in developing AI tools.
‘Less glamorous’ uses
Health systems may place more emphasis on, and see more gains from, the “less glamorous” uses of AI to improve their operations, such as finding new efficiencies and automating more functions.
Some health leaders, even those who are enthusiastic about AI’s potential to improve healthcare, have grown wary of some of the hype around AI-based tools. Swanson said he’s hearing more nuanced conversations about AI, with health leaders asking more questions about how AI-powered tools will solve problems, as opposed to simply adding new technologies and then finding a way to use them.
“One of the important shifts here, and frankly, this is true across any industry, but particularly in healthcare, the identification of those business challenges and problems first is the most critical piece,” Swanson says. “And then, if it so happens that AI or machine-learning-based tools are ways in which those challenges can be solved, then the application of that makes a lot of sense. That is where you’re seeing some of the shift in conversation.”
Health systems are looking at AI solutions that can streamline revenue cycle management and help handle the high volume of claims. They also will look at uses for AI that may not be sexy but are “far more practical and results-driven,” Swanson says.
Health systems can also use some predictive tools to project future volumes and help improve their budgeting process, he says.
Clinical uses
Swanson said he expects health systems will move more quickly in utilizing AI technologies on the operational side of healthcare than in clinical decision support or remote patient monitoring. There are a lot of efforts in those areas and great value to be unlocked, he says, but health systems are being more cautious about incorporating AI in clinical areas.
“For a risk averse industry, which healthcare is, I think we'll see a lot of focus and work in really that science of healthcare operations and delivery in this upcoming year,” Swanson says.
To be sure, health systems are using AI in the clinical space, but it’s happening a bit more slowly, Swanson said. Health systems recognize that there need to be safeguards in order to use AI tools in treating patients, he says.
“There needs to be appropriate controls put in place,” he says. “This is not only appropriate data governance, but ensuring that there's a human in the loop in many of these instances, that the models that you are using have been thoroughly tested, and that there is effectively maintenance and oversight on these models over time to ensure that they're not drifting in directions one may not choose.”
One lower-risk area Swanson sees for AI in clinical use is risk stratification: essentially, serving as an early warning system for patients who could be at higher risk of developing certain conditions, such as sepsis. Those tools could help identify patients who need more vigilance, and that’s an area where health systems are turning to AI, he said.
Health systems have been using AI solutions to help doctors summarize notes from patient visits. Swanson said some systems are also using AI technology to summarize nurses’ notes as clinicians hand off patients during shift changes.
“There's a lot of really interesting work occurring in this space that directly impacts patient care and patient satisfaction, and ultimately the outcomes that those patients receive,” Swanson said.
A need for governance
Hospitals need to carefully consider governance in the development of AI solutions, and that includes people, process and technology, Swanson said.
Organizations need to have appropriate people in place to offer guidance in the deployment of AI tools and how to manage and mitigate risk.
“This ranges from not only the clinical impact and outcome that has to be considered, but intellectual property,” he said. “And for models that are potentially being trained on patient data, how do we ensure that it's appropriately safeguarded?”
Health systems need to be sure that they are using accurate data in AI models, and that the models are ethical and don’t reflect biases against disadvantaged groups. A Stanford School of Medicine study, published in Digital Medicine, found some AI chatbot answers reflected racial bias.
Organizations need strong data governance “to ensure that you're getting the intended results and you're not accidentally including certain types of biases,” Swanson said.
Health systems need to be sure that AI data is accurate, and to be wary of the possibility of AI hallucinations, he added.
“I think there are some areas in which model accuracy is excellent,” Swanson said. “There are others in which it's quite poor. The key is being able to know how your models are performing, and evaluate that appropriately.”
Health systems also need to exercise caution in launching pilot programs for AI clinical tools, including conducting regular audits to evaluate their performance.
“It is absolutely critical for organizations to begin to address those before they move deeply into the development and deployment of AI and machine learning tools,” Swanson said.