Leaders, innovators, and clinicians must work together to develop key principles guiding the adoption of AI in a way that strengthens the healthcare system and supports patients.
The COVID-19 pandemic and AI may seem unrelated, but they are, in fact, inextricably linked—particularly in the context of healthcare.
The pandemic exacerbated systemic problems in our healthcare system, including clinician burnout, healthcare inequities, high costs, and rampant inefficiencies. AI has the potential to address these challenges in meaningful and measurable ways by increasing productivity and improving diagnoses and patient outcomes.
Many leaders in healthcare see AI as transformative, provided appropriate guardrails and regulations are established.
When I think about the possibilities for AI in healthcare, I look to a recent study commissioned by GE HealthCare to add context and insight to the barriers standing in the way of a better healthcare system. The study amplified the voices of thousands of clinicians and patients from eight countries, shining a spotlight on their personal perspectives as a critical and frequently overlooked part of this conversation.
The study found that 60% of surveyed clinicians believe it is very important to use advanced technology to make basic clinical tasks more efficient; however, trust issues remain. In the United States, for example, only 26% of surveyed clinicians believe AI data can be trusted. Concerns stemmed from a perceived lack of transparency, the risk of overreliance, legal and ethical considerations, limited training data, and fear of job displacement.
Medicine is a complex field that requires a combination of scientific evidence, clinical expertise, and patient interaction. Some clinicians may worry that these factors are improperly balanced in, or omitted from, AI-generated recommendations, and that excessive reliance on AI could diminish the human element of care, potentially leading to the depersonalization of medicine.
AI models require high-quality data to ensure accurate predictions. In medicine, however, obtaining such data can be challenging due to privacy concerns, a shortage of comprehensive datasets, and limited access to the experts needed to train models. While the healthcare industry has experienced a digital transformation over the past decade, most of this information is "unstructured data" that must be extracted and transformed before it can be searched and analyzed.
According to the World Economic Forum, an estimated 97% of healthcare data goes unused because it lacks structure and is error prone. The process of extracting this information is labor-intensive and operationally complex, costing health organizations significant resources.
Clinicians may be skeptical of AI systems because they perceive that the systems have been trained on incomplete or biased data, which can produce misleading results. Efforts to demystify this technology by increasing transparency about data sources and the training process will help build critical trust among both patients and providers.
A thoughtful, data-driven approach is key to building confidence among clinicians and patients and driving AI forward in healthcare. That means leaders, innovators, and clinicians working together on shared principles for adoption.
AI will be largely shaped by the highly regulated environments in which it is applied. Regulatory policies and payer reimbursement models must be designed by stakeholders and policymakers working together to provide the appropriate guardrails to ensure accuracy and performance of the underlying machine learning models.
AI has the potential to drive efficiencies across the healthcare system, lessen doctors' administrative burdens, improve and confirm diagnoses, and support a more tailored, patient-specific approach to treatment protocols. No technology will ever replace clinicians, but AI can serve as an "intelligent assistant" that improves efficiency and reduces clinicians' administrative burden so they can do what they do best: take care of patients.
Recent advancements in machine learning and AI are already being used to reduce scan times, monitor patient health inside and outside the hospital, reduce patient wait times and automate administrative tasks like scheduling, updating medical records, and processing paperwork.
A deep learning application embedded in women's health ultrasound systems, for example, has reduced keystrokes by 80% for fetal brain exams, making them simpler, faster, and more reproducible for clinicians. In some cases, MRI scan times have been cut in half while simultaneously improving image quality, supporting a better patient and provider experience.
Now is the time to shape an AI roadmap for healthcare designed around the benefit, safety, and privacy of the patient. For AI to be viewed as a trusted steward of medical data and insights, it must be transparent, deliver robust and reproducible results, and guard against creating or reinforcing bias.
GE HealthCare is calling for collaboration among industry stakeholders to develop an environment that ensures responsible use of this technology, drives efficiencies, improves efficacy, protects privacy, promotes equity, and supports data and device interoperability for the future of healthcare.
Dr. Taha Kass-Hout is chief technology officer for GE HealthCare.