A quick Google search for “AI partner” returns nearly six billion results, and “AI healthcare partner” returns 1.6 billion results, which makes finding a technology partner seem daunting.
However, the importance of AI cannot be denied. Artificial intelligence has permeated every industry, promising easier workflows and unparalleled insights into everything from consumer buying habits to healthcare diagnostics.
The global AI market was estimated at $328 billion in 2021 and is expected to grow by $1 trillion by 2029, underscoring the explosion of AI technology.
For all the hype surrounding artificial intelligence, machine learning (ML), and natural language processing (NLP), actual results are decidedly mixed — especially on the diagnostic side. In healthcare, mixed results will not lead to better outcomes. In fact, a missed diagnosis can have life-changing consequences.
Unlike most other industries, healthcare is intimate, from the nature of the care itself to the protection of the information arising from health encounters.
Any AI partner in healthcare must understand the unique nature of the business, support the safeguarding of protected health information (PHI), be able to automate business processes, and possess a proven track record of innovation.
People are not like widgets
The human population is endlessly diverse, and any AI healthcare solution must account for this diversity. Biases can occur when technology is developed and validated using populations that are not representative.
A recent editorial in The Lancet suggests shifting the focus of healthcare AI from demonstrating its strengths to proactively discovering its weaknesses, noting the potential consequences of errors.
The authors cite the deterioration over time of an algorithm used by a U.S. electronic health records (EHR) system to predict sepsis risk in response to changing inputs.
Those changing inputs included new clinical codes, increased volume and diversity of patient data, and shifts in providers' operational behaviors that, combined, produced a 27% decrease in prediction accuracy over a decade. Oversight of AI projects must extend beyond product release to continually validate and confirm findings.
Medical experts in the UK have proposed a “medical algorithmic audit” that brings users and developers together to ensure the safety and performance of AI solutions. Their study also raises concerns about the efficacy of AI across demographic groups.
Documentation of algorithmic derivations should be part of the evaluation process for AI systems, and evaluators should ask pointed questions about populations used and whether or how often datasets are validated.
Privacy and ethical considerations
Obviously, emerging AI/ML solutions must adhere to the same privacy and security standards under HIPAA that other solutions that collect, store, and transmit PHI do. But solutions like these should also adhere to responsible and ethical AI practices, territory that remains the Wild West in the absence of regulation.
An alphabet soup of agencies directly and indirectly responsible for healthcare regulation is working on guidance governing AI, including the Centers for Medicare & Medicaid Services, the Food and Drug Administration, the Federal Trade Commission, the U.S. Department of Health and Human Services, and the World Health Organization. These agencies have enlisted AI developers, providers, patients, academics, and other stakeholders to find the right path forward, balancing the potential of AI to transform diagnostics and administration with the need to promote health equity and eliminate implicit bias.
Issues with facial recognition software and its perceived bias against people of color point to the importance of validating AI technology before it is used with patients in real-world situations.
Administrative functions prime for AI transition
While AI certainly is being used in diagnostics, the technology is more mature on the administrative side of healthcare.
Emerging administrative uses include referral management, prior authorizations, care management prioritization based on acuity, orders based on diagnosis or procedure, and identifying patient demographics to link images — such as faxes, scans, and PDFs — to patient records.
Many organizations remain skeptical about adopting AI workflow solutions, but skepticism should not deter progress. For workflow improvements, look for solutions that pair automated review mechanisms with human oversight to build trust in AI.
Extracting data from fax transmissions manually requires significant human effort and data entry that is often inaccurate and time-consuming; the same task can be accomplished quickly and accurately using natural language processing and AI.
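As a minimal sketch of what that extraction step can look like, the snippet below pulls a few demographic fields out of OCR'd fax text with simple pattern matching. The field names, labels, and patterns are illustrative assumptions, not any vendor's actual schema; production systems layer trained NLP models on top of this kind of logic to handle the messiness of real documents.

```python
import re

# Hypothetical example: pull key demographic fields out of OCR text
# from a faxed referral. Labels and patterns are illustrative only.
FIELD_PATTERNS = {
    "patient_name": re.compile(r"Patient Name:\s*(.+)"),
    "dob": re.compile(r"DOB:\s*(\d{2}/\d{2}/\d{4})"),
    "member_id": re.compile(r"Member ID:\s*([A-Z0-9-]+)"),
}

def extract_fields(ocr_text: str) -> dict:
    """Return whichever fields the patterns find; missing ones are None."""
    fields = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(ocr_text)
        fields[name] = match.group(1).strip() if match else None
    return fields

sample = """Referral Fax
Patient Name: Jane Doe
DOB: 04/12/1968
Member ID: ABC-12345
"""
print(extract_fields(sample))
```

Fields that the patterns cannot find come back as `None`, which is what makes the human-in-the-loop review described above practical: only the gaps need manual attention.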
The key is to start with a proof of concept, focusing on incremental workflow improvements through NLP and AI that can be measured for results rather than looking for a complete transformation.
To maximize the value of these technologies, health systems must be willing to adapt their workflows to the strengths of NLP and AI so that the results can be measured.
If the goal is to shorten approval time for an authorization that would normally require human intervention and manual data entry, the workflow changes should minimize that intervention and redeploy those staff effectively. The cost savings and time to approval can then be measured, along with the faster treatment that can lead to better health outcomes.
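Measuring that improvement can be as simple as comparing mean turnaround time before and after the workflow change. The sketch below uses made-up timestamps to illustrate the calculation; the figures are not drawn from any real system.

```python
from datetime import datetime

def mean_turnaround_hours(requests):
    """Mean hours from authorization request to approval.

    `requests` is a list of (requested_at, approved_at) datetime pairs.
    """
    deltas = [(approved - requested).total_seconds() / 3600
              for requested, approved in requests]
    return sum(deltas) / len(deltas)

# Illustrative sample data: manual vs. automated processing.
manual = [
    (datetime(2023, 1, 2, 9), datetime(2023, 1, 4, 9)),   # 48 h
    (datetime(2023, 1, 3, 9), datetime(2023, 1, 4, 21)),  # 36 h
]
automated = [
    (datetime(2023, 1, 2, 9), datetime(2023, 1, 2, 13)),  # 4 h
    (datetime(2023, 1, 3, 9), datetime(2023, 1, 3, 11)),  # 2 h
]
print(mean_turnaround_hours(manual), mean_turnaround_hours(automated))
```

Tracking a single concrete metric like this is what turns a proof of concept into something an organization can actually evaluate.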
In the aviation industry, automation technology has evolved to the point where passenger planes can theoretically take off and land by themselves (although FAA regulations mostly prohibit this).
The same tenets hold for diagnostic AI uses — first look to technology that assists providers and technicians rather than replacing them. In both cases, users are likely unwilling to cede control solely to technology.
Practical uses for AI/NLP technologies
Despite continued advances in healthcare technology to improve diagnostics and ease the flow of information along the care continuum, 70% of healthcare communication remains via fax.
When used in the proper environment, faxing meets HIPAA standards and is straightforward to use. However, faxed documents are essentially images. To incorporate a fax into a patient record, someone must manually transcribe that information into a format the EHR will recognize, a time-consuming and inefficient process.
A state-run healthcare organization sought a better way to manage Medicare and Medicaid insurance claims through the approval process than its system of managing thousands of unstructured documents.
Using AI and NLP technologies, the organization was able to surface relevant information from these formerly unstructured documents, reducing manual processes and increasing workflow efficiency and throughput.
Healthcare is taking tentative steps toward technologies that employ artificial intelligence, machine learning, and natural language processing to improve workflows across the landscape, from the back office to the treatment room.
As with any emerging technology, the best approach combines taking small steps toward adoption and transparency into whether the technology is performing as expected.
Any technology should adapt to existing workflows, not cause workflows to contort to the technology. The adoption of AI in healthcare will continue to grow but will truly explode once health equity and validation concerns are addressed.
Jeffrey Sullivan serves as Chief Technology Officer for Consensus Cloud Solutions, Inc.