He spoke at ViVE about the need for AI in health care, the case for starting small and moving fast, how to weigh risks, and a future in which not using AI could be grounds for malpractice.
Nashville – John Halamka, MD, doesn’t see any way to avoid using AI in health care in the future.
Halamka is the president of the Mayo Clinic Platform, which works with providers, drug companies, medical device manufacturers and startups to develop new technologies to improve care. He’s also the chairman of the board of directors of the Coalition for Health AI.
During a conversation at the ViVE digital health conference, Halamka talked with Sonia Singh, chief insights officer of AVIA, about the growing importance of AI in health care. He noted that most industrialized societies are facing the challenge of low birth rates and aging populations “that aren’t having extensions of their healthy years.”
“They’re just living longer and they’re sick, and so we aren’t going to have enough caregivers to be able to deliver the care they need,” Halamka said. “We have to use AI.”
But he notes there are thorny questions about how quickly AI tools can and should be used, how AI in health care should be regulated, and the risks of adopting new models.
Start small, move fast
Halamka points to the growing use of ambient listening tools, which health systems are using to document patient visits, allowing doctors to have natural conversations with patients rather than typing notes on a computer. These AI tools also provide summaries of the patient encounter, enabling clinicians to save time and energy.
“We know ambient listening is the thing that will solve many business problems. Well, it’s not perfect, but what’s the risk if it goes bad? Pretty small. And so we say, ‘Oh, we’re willing to take that risk,’” he said.
Mayo Clinic is now using an inpatient ambient nursing solution in Arizona and Florida that, Halamka said, does “100% of the nursing charting without the nurse having to touch a keyboard.”
“Is it perfect? No, but good enough and low risk, and there's a human nearby who looks at everything documented before signing off,” he said.
Mayo Clinic has also developed other AI-powered models, including one for chest X-ray interpretation. But he said, “They’re all augmenting human behavior and not replacing the human.”
Halamka summed up the approach Mayo Clinic has taken in developing new tools. “Think big, start small, move fast,” he said.
He cited Mayo Clinic’s early foray into providing acute hospital care at home. The organization began with one patient in 2020 as the COVID-19 pandemic emerged.
“We started with one patient, and did that one patient and family end to end, studying every aspect of what we did well and not,” he said. “And then we moved to 10, and then 100. As of today, we’ve treated about 50,000 patients in their homes. The outcomes are the same or better. The cost is less, patient satisfaction is higher.”
“We knew we would start small and we would move fast, but only after we studied the impact of the change we were making,” Halamka added.
He added that health systems need to continually evaluate AI tools.
“Think of AI like a pharmaceutical. You'll want to do post-market surveillance to discover where it breaks,” Halamka said.
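To make the pharmaceutical analogy concrete, post-market surveillance for a deployed model might amount to routinely re-scoring recent labeled cases and flagging subgroups where performance slips. The sketch below is a hypothetical illustration only; the subgroup names, threshold, and synthetic data are assumptions, not Mayo Clinic's actual pipeline.

```python
# Hypothetical post-market surveillance sketch: re-score recent cases and
# flag subgroups whose performance falls below an acceptance floor.
# Subgroups, the threshold, and the synthetic data are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in for a month of labeled outcomes pulled from the record system.
n = 2000
site = rng.choice(["site_A", "site_B"], size=n)
y_true = rng.integers(0, 2, size=n)
# Simulated model scores, with a weaker signal at site_B to mimic drift.
signal = np.where(site == "site_A", 0.8, 0.2)
y_score = np.clip(y_true * signal + rng.normal(0.3, 0.25, size=n), 0.0, 1.0)

AUROC_FLOOR = 0.75  # illustrative acceptance threshold

for group in np.unique(site):
    mask = site == group
    auc = roc_auc_score(y_true[mask], y_score[mask])
    status = "OK" if auc >= AUROC_FLOOR else "REVIEW: below floor"
    print(f"{group}: AUROC={auc:.3f} ({status})")
```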
Eventually, the standard of care
Some solutions may have deficiencies, but that doesn’t necessarily mean they need to be scrapped, although health systems must consider the ethical implications.
“There was a wonderful cardiology algorithm we have at Mayo Clinic that predicts cardiac mortality and morbidity, and it works really great if your body mass index is less than 31 and not so good if it’s above 35,” Halamka said. “So is it ethical to use an algorithm that has an inherent limitation? Well, yes, but only on people with a body mass index under 31.”
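One way to operationalize that kind of limitation is a hard eligibility gate in front of the model, so a score is returned only for patients inside the validated range. In this minimal sketch, the BMI cutoff of 31 comes from Halamka's example; everything else (the names, the model stub) is assumed for illustration.

```python
# Hypothetical eligibility gate: apply the algorithm only inside its
# validated range. The BMI < 31 cutoff follows Halamka's example; the
# model itself is a placeholder stub.
from dataclasses import dataclass
from typing import Optional

BMI_VALIDATED_MAX = 31.0  # validated range per the example

@dataclass
class Patient:
    patient_id: str
    bmi: float

def predict_cardiac_risk(patient: Patient) -> float:
    """Stub standing in for the real mortality/morbidity model."""
    return 0.12

def gated_prediction(patient: Patient) -> Optional[float]:
    """Return a risk score only for patients in the validated BMI range."""
    if patient.bmi >= BMI_VALIDATED_MAX:
        return None  # outside validation; fall back to standard workup
    return predict_cardiac_risk(patient)

print(gated_prediction(Patient("p1", bmi=27.4)))  # 0.12
print(gated_prediction(Patient("p2", bmi=36.0)))  # None
```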
Singh posed an intriguing question: “Do you envision a world where not using AI in certain use cases could be considered malpractice?”
“Absolutely,” Halamka answered.
He noted that pancreatic cancer “is just really hard to see on a CT or MRI.” But he continued, “Mayo today has an algorithm that detects pancreatic cancer at stage zero, when it is treatable, on average, two years before any human radiologist can see it.”
“You’ve got to guess that in just a few years, it will become the standard of care to use AI as an augmentation,” Halamka said. “And of course, the definition of malpractice is you varied from the standard of care. So, AI will be part of what we do every day.”
Still, there are important societal questions about the use of AI in health care that must be addressed, including what level of accuracy is acceptable.
“If you're in a geography where there are no human doctors available, would you rather have nothing, or an algorithm that is imperfect? And so these are cultural questions we all have to answer,” he said.
Looking ahead
When Singh asked about AI solutions that may be more widespread in the near future, Halamka pointed to predictive AI and tools that will ask clinicians to consider a diagnosis. He described such tools as “fairly low risk.”
“The thing about predictive AI … it's math. Patterns. So you can test. You can decide if it's likely to be good or not. You can understand its risks and limitations. So sure, we are going to see a lot of physician workflow and nursing workflow augmented by predictive AI,” he said.
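Because predictive models are, as Halamka puts it, “math” and “patterns,” they can be measured on held-out data before go-live. Below is a minimal sketch of that kind of pre-deployment test; the features, labels, and go/no-go bar are entirely synthetic stand-ins.

```python
# Hypothetical pre-deployment test: train on one split, measure on a
# held-out split, and compare against a preset go/no-go bar.
# Features, labels, and the bar are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 8))  # stand-in clinical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Held-out AUROC: {auc:.3f}")  # decide go/no-go against the bar
```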
However, he said, generative AI use cases “are harder.”
“We know at the moment, despite whatever claims you may see, there is no such thing as hallucination-free generative AI,” Halamka said.
Singh asked Halamka what advice he would offer to hospitals and health systems that are weighing AI solutions. Halamka noted that academic medical centers are going to have different needs than community hospitals and federally qualified health centers.
“Each of them is going to have a different rate of adoption and a different set of needs,” he said. “So what you should probably ask is, what are the business problems I have today, and how can I address them more rapidly by introducing technology?”
Halamka also said academic centers developing new AI tools and solutions have a duty “to make sure they're getting into the hands of the federally qualified health centers and critical access hospitals first.”
At the conclusion of the session, Singh asked Halamka what he expected the conversation about AI in health care to look like a year from now.
“There will be, from all of us, the success stories of what has worked, what has improved patient care, led to earlier diagnosis, faster treatment, lower cost and better outcomes,” Halamka said.
“Because it becomes real,” he added. “I mean, we really are at the point in 2025, the data is good enough, the technology is getting good enough, the compute is getting available enough, and the use cases are getting clearer. So next year, we talk about what's real.”