
The case for slowing down clinical AI deployment | Viewpoint

There's a compelling argument for healthcare as a whole to "slow the roll" on clinical AI deployment. This pause is not about stifling innovation but about ensuring that AI tools are developed and implemented responsibly.

Artificial intelligence whispers tantalizing promises of revolution in hospital corridors and research labs across the globe.

From enhancing diagnostic accuracy to streamlining clinical workflows, AI's potential in healthcare seems boundless. Yet, as the allure of these high-tech solutions grows, so too does a shadow of concern. Recent findings have exposed a critical flaw in the foundation of many AI medical technologies, highlighting an urgent need to pump the brakes on their hasty deployment.

Many AI medical devices lack real patient data

A recent study published in Nature Medicine has revealed a startling truth about FDA-approved AI medical devices: nearly half of them have not been trained on actual patient data. This finding, uncovered by researchers from the UNC School of Medicine, Duke University, and other institutions, raises serious questions about the credibility and effectiveness of these AI tools in real-world clinical scenarios.

As Sammy Chouffani El Fassi, the study's lead author, points out, "Although AI device manufacturers boast of the credibility of their technology with FDA authorization, clearance does not mean that the devices have been properly evaluated for clinical effectiveness using real patient data." This revelation underscores a fundamental disconnect between regulatory approval and clinical efficacy, potentially putting patient safety at risk.

The complexity of AI development in healthcare

Developing AI-based medical tools presents unique challenges. The healthcare industry faces a constant struggle between the need for high-quality data and the ethical and practical constraints of accessing real patient information. While synthetic data offers a potential workaround, it often faces criticism for not accurately representing the complexities and nuances of real patient cases.
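To make this tension concrete, consider how one might check whether a synthetic cohort actually resembles real patients. The following is a minimal, illustrative Python sketch; the feature names and data frames are hypothetical assumptions, not drawn from any specific device or dataset.

import pandas as pd
from scipy.stats import ks_2samp

def compare_cohorts(real: pd.DataFrame, synthetic: pd.DataFrame, features: list[str]) -> pd.DataFrame:
    """Flag features whose synthetic distribution diverges from the real cohort."""
    rows = []
    for feature in features:
        # Two-sample Kolmogorov-Smirnov test: a larger statistic means larger divergence
        stat, p_value = ks_2samp(real[feature].dropna(), synthetic[feature].dropna())
        rows.append({"feature": feature, "ks_statistic": stat, "p_value": p_value})
    return pd.DataFrame(rows).sort_values("ks_statistic", ascending=False)

# Hypothetical usage:
# report = compare_cohorts(real_cohort, synthetic_cohort, ["age", "creatinine", "systolic_bp"])
# print(report)

Even a check this simple only catches mismatches in individual feature distributions; the clinically important failures often involve joint relationships and rare subpopulations that synthetic data tends to smooth over.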

This dilemma highlights the intricate balance required in 21st-century medical AI development. On one hand, progress in AI capabilities promises to revolutionize patient care. On the other, the use of inadequate or non-representative data could lead to biased, inaccurate, or potentially harmful outcomes.

For medical AI to be considered reliable and trustworthy, there must be openness and clarity regarding how its core algorithms are developed and evaluated. This transparency is crucial for uncovering any inherent biases and effectively communicating possible dangers or adverse effects.

Transparency is critical, but lacking

Transparency emerges as a crucial factor in addressing these concerns. As the healthcare industry grapples with the rapid proliferation of AI tools, there's an urgent need for clear, accessible information about how these systems are developed, trained, and validated.

A study published in Frontiers in Digital Health emphasizes this point, finding that the public documentation of authorized medical AI products in Europe lacks the transparency needed to inform users about safety and risks. Transparency scores for these products ranged from a mere 6.4% to 60.9%, with a median of 29.1%. This lack of openness makes it difficult for healthcare providers and patients to make informed decisions about the use of AI tools in clinical settings.

The case for slowing down

Given these findings, there's a compelling argument for healthcare as a whole to "slow the roll" on clinical AI deployment. This pause is not about stifling innovation but rather ensuring that AI tools are developed and implemented responsibly, with patient safety as the paramount concern.

Several key reasons support this cautious approach:

Data quality: As the initial study revealed, many AI tools lack training on real patient data. A slowdown would allow time for developers to source and incorporate high-quality, diverse patient data into their models.

Validation and testing: More thorough clinical validation studies are needed to assess the real-world performance of AI tools across diverse patient populations and healthcare settings (a minimal sketch of subgroup-level evaluation follows this list).

Transparency: A pause in rapid deployment would provide an opportunity for developers and regulators to establish clear guidelines for transparency in AI development and validation processes.

Ethical considerations: Slowing down allows for a more comprehensive examination of the ethical implications of AI in healthcare, including issues of bias, fairness, and patient privacy.

Regulatory framework: This period could be used to develop more robust regulatory frameworks that better align FDA clearance with clinical effectiveness and safety standards.
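On the validation point above, here is one minimal sketch of what subgroup-level evaluation can look like in practice. The data frame, column names, and metric are illustrative assumptions, not a prescribed protocol.

import pandas as pd
from sklearn.metrics import roc_auc_score

def auroc_by_subgroup(df: pd.DataFrame, group_col: str,
                      label_col: str = "outcome", score_col: str = "model_score") -> pd.Series:
    """Compute AUROC separately per subgroup; NaN where only one outcome class is present."""
    def safe_auroc(group: pd.DataFrame) -> float:
        if group[label_col].nunique() < 2:
            return float("nan")
        return roc_auc_score(group[label_col], group[score_col])
    return df.groupby(group_col).apply(safe_auroc)

# Hypothetical usage on a held-out validation set:
# for col in ["sex", "age_band", "care_setting"]:
#     print(auroc_by_subgroup(validation_df, col))

A gap of even a few points between subgroups is exactly the kind of signal that should be surfaced and explained before deployment, not discovered afterward.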

A balanced approach

While the case for slowing down is strong, it's equally important to recognize the potential benefits of AI in healthcare. The goal should not be to halt progress but to ensure that advancement occurs responsibly and with due diligence.

During this slowdown period, the healthcare industry should focus on establishing comprehensive standards for AI development and validation in clinical settings. These standards would serve as a benchmark for quality and safety, ensuring that AI tools are built on a solid foundation of reliable data and rigorous testing.

Alongside these standards, there's a pressing need for collaborative platforms that facilitate the sharing of high-quality, anonymized patient data for AI training and testing. Such platforms would address the data shortage issue while maintaining patient privacy and ethical standards.

Transparency should be at the forefront of AI development in healthcare. Comprehensive guidelines for AI documentation and reporting need to be developed and implemented across the industry. These guidelines would ensure that healthcare providers, patients, and regulators have access to clear, understandable information about how AI tools are developed, trained, and validated.
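One way to picture such documentation is a structured, machine-readable record that travels with the model, in the spirit of a "model card." The fields below are illustrative assumptions about what a minimum disclosure might contain; actual reporting guidelines would define the required content.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ClinicalModelCard:
    model_name: str
    intended_use: str
    training_data_source: str          # e.g., "retrospective EHR data, multi-site"
    real_patient_data_used: bool       # directly addresses the gap highlighted above
    validation_populations: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)
    regulatory_status: str = "unspecified"

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Hypothetical example:
# card = ClinicalModelCard(
#     model_name="sepsis-risk-v2",
#     intended_use="Early warning for inpatient sepsis",
#     training_data_source="De-identified multi-site EHR cohort",
#     real_patient_data_used=True,
#     validation_populations=["adult inpatients, three academic centers"],
#     known_limitations=["not validated in pediatric patients"],
#     regulatory_status="not FDA cleared",
# )
# print(card.to_json())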

To ensure ongoing safety and efficacy, rigorous post-market surveillance systems must be implemented to monitor AI performance in real-world settings. These systems would allow for rapid identification and addressing of any issues that arise as AI tools are used in diverse clinical environments.
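As a rough illustration of what such surveillance can look like at its simplest, the sketch below tracks a rolling discrimination metric over recent prediction/outcome pairs and flags when it falls below a floor. The window size, threshold, and data schema are assumptions for illustration, not regulatory guidance.

from collections import deque
from sklearn.metrics import roc_auc_score

class PerformanceMonitor:
    def __init__(self, window: int = 500, auroc_floor: float = 0.75):
        self.scores = deque(maxlen=window)   # model risk scores
        self.labels = deque(maxlen=window)   # observed outcomes (0/1)
        self.auroc_floor = auroc_floor

    def record(self, score: float, outcome: int) -> None:
        self.scores.append(score)
        self.labels.append(outcome)

    def check(self) -> tuple[float, bool]:
        """Return (current AUROC, alert flag); requires both outcome classes in the window."""
        if len(set(self.labels)) < 2:
            return float("nan"), False
        auroc = roc_auc_score(list(self.labels), list(self.scores))
        return auroc, auroc < self.auroc_floor

# Hypothetical stream of predictions and later-confirmed outcomes:
# monitor = PerformanceMonitor(window=500, auroc_floor=0.75)
# monitor.record(score=0.82, outcome=1)
# auroc, alert = monitor.check()

In practice, surveillance of this kind would also watch calibration, data drift, and subgroup performance, and would route alerts into the clinical workflow rather than a log file.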

Fostering interdisciplinary collaboration between AI developers, healthcare providers, ethicists, and policymakers is crucial. This collaborative approach would ensure that the complex challenges of AI in medicine are addressed from multiple perspectives, leading to more robust and ethical solutions.

The promise of AI in healthcare is undeniable, but so too are the risks of hasty implementation. By taking a measured approach and prioritizing the use of high-quality patient data, transparency, and thorough validation, we can work towards a future where AI truly enhances patient care without compromising safety or ethical standards.

As we navigate this emerging and increasingly complex terrain, the words of Karin Rolanda Jongsma and colleagues in their npj Digital Medicine article serve as a fitting reminder: "We conclude that it is important to remain conscious and critical about how we talk about expected benefits of AI, especially when referring to systemic changes based on single studies."

By slowing down and addressing these crucial issues head-on, we can build a stronger, more reliable foundation for the future of AI in healthcare—one that truly puts patients first.

Jay Anders, MD, is chief medical officer of Medicomp Systems.

