A multi-society statement aims to set expectations about the use of AI in radiology.
Artificial intelligence (AI) software can help radiologists perform their jobs better. But the ethical use of the technology in the field should promote well-being and minimize harm resulting from potential biases, according to a multi-society statement on the ethical use of AI in radiology.
The statement, which includes views from the American College of Radiology (ACR) and the European Society of Radiology, aims to set expectations about the use of AI in radiology and to establish a common understanding of the ethical issues the technology raises.
“This international multi-society statement is one step to help the radiology community build an ethical framework to steer technological development, influence how stakeholders respond to and use AI, and implement these tools to do right for patients,” Raymond Geis, M.D., a senior scientist at the ACR Data Science Institute, said in a statement to Inside Digital Health™.
The societies focused on three major areas while creating the statement: data, algorithms and practice.
Ethical issues related to data include privacy, who has access to data and why, notifying patients, data ownership, and data accuracy and bias, Geis said. With algorithms, ethical issues involve bias, explainability and how to monitor them. Ethical practice issues include how to act on the decisions that AI makes, how to monitor AI, whom to notify about its use and when, and questions of regulation and codes of conduct, he added.
“The primary ethical goal of radiology AI is that it helps an individual patient when it’s applied to them,” Geis said. “Beyond that, AI that helps the common good is ethical.”
Examples of ethical use of AI in radiology include identifying patterns of best treatments for a whole population or identifying findings in radiology exams to discover a new disease, Geis added.
AI can provide benefits, but like most technology, it could harm patients.
“Ethically, you want to be sure that both the good and bad parts are distributed equally across subgroups of people,” he said. “Under-represented groups should get equal benefits and not get more than their share of harm.”
Unethical behavior could be built into an algorithm inadvertently or on purpose. And even when an algorithm behaves ethically, the human expert might use its output in unethical ways, Geis said.
AI should respect human rights and freedoms and should be transparent and highly dependable, the statement said.
“Though the hype surrounding radiology AI is unrealistic and not helpful, we can see that AI will dramatically affect every part of radiology, as well as all of medicine,” he said.
To ensure ethical AI, tools need to be vetted properly by legitimate regulatory boards. The technology must also be monitored after implementation for unintended consequences and loss of quality.
The radiology community can develop ethics codes and best practices to guide the use of the technology and ensure privacy and safety for patients, said co-author Matthew Morgan, M.D., M.S., associate professor and director of IT and quality improvement in breast imaging at the University of Utah.
“Radiologists will remain ultimately responsible for patients’ well-being and will need to acquire new skills to manage this technology,” Geis concluded. “We have a duty to ensure that radiology AI remains human-centric, helps patients and the common good and evenly distributes both the benefits and harms that may occur.”