The principles claim that AI developers are responsible for the implications of the technology.
Healthcare providers can now follow a new set of artificial intelligence (AI) guidelines when developing and implementing the technology. The principles spell out the consequences for developers who design faulty software and make clear that patient safety is always the priority when using such technologies.
This week, the Partnership for AI, Telemedicine and Robotics in Healthcare (PATH), an alliance of stakeholders working to improve care and build efficiencies using advanced technologies, released the guidelines. PATH designed the principles to foster the safe, ethical and valuable use of AI in medicine, largely holding tech developers responsible for the implications of use and misuse of AI.
Automation, robotics and AI in healthcare have attracted a lot of press, even though the industry is still in the early stages of development and deployment, Jonathan Linkous, co-founder and CEO of PATH, said in a statement to Inside Digital Health™.
But patients fear the technology due to warnings about “killer robots” and stories of “what-ifs,” he added.
“So, developing a set of principles around the development and implementation of AI in medicine was developed to set out certain guidelines that, if providers and developers agree to follow, would go a long way in alleviating those concerns,” Linkous said.
The principles are designed to assure patients and the public that the use of AI in healthcare will provide safe, equitable and high-quality services.
The guidelines include 12 principles developed by members of PATH and other healthcare leaders.