Researchers found that the answers generally matched responses from ophthalmologists in terms of accuracy. They stress that more study is needed.
For good or ill, Americans have been turning to the internet with their healthcare questions, and many are now asking ChatGPT about medicine and treatment.
In a new study, researchers found ChatGPT did a solid job of answering questions about eye care. The findings were published Tuesday in JAMA Network Open.
The researchers compared ChatGPT's responses with ophthalmologists' answers to 200 questions drawn from an online advice forum. For the most part, they found that ChatGPT was accurate and in line with current medical guidance.
“Chatbot and human responses did not significantly differ in terms of presence of incorrect information, likelihood of causing harm, extent of harm, or agreement with perceived consensus in the medical community,” the authors wrote.
The human answers were written by nine board-certified ophthalmologists who had held board certification for a median of 30.7 years.
A panel of eight ophthalmologists then reviewed the answers written by ophthalmologists alongside the AI-produced answers. In the majority of cases, the panel was able to distinguish the chatbot responses from those provided by the doctors, correctly identifying whether an answer came from a human or the chatbot 61% of the time.
The ChatGPT responses tended to be longer, according to the study. But researchers said ChatGPT demonstrated “it could generate surprisingly coherent and correct answers to many ophthalmology questions, some of which were quite detailed and specialized.”
The researchers did see some significance in the study for ophthalmology and healthcare. They noted that as ChatGPT and other AI-enabled tools grow in popularity, it is important to assess whether the advice they give patients is accurate or potentially harmful.
“Regardless of whether such tools are officially endorsed by health care providers, patients are likely to turn to these chatbots for medical advice, as they already search for medical advice online,” the authors wrote.
The authors don’t suggest using ChatGPT or other AI tools to replace ophthalmologists, or physicians in general. But they suggest, “there may be a future in which they augment ophthalmologists’ work and provide support for patient education under appropriate supervision.”
While healthcare leaders and analysts see AI’s potential to change healthcare, patients still appear to have some concerns.
Six out of ten Americans (60%) said they would be uncomfortable if their doctor used AI to diagnose a disease or develop a treatment plan, according to a survey released Feb. 22 by the Pew Research Center. The survey found 39% said they would be comfortable with AI, while 1% gave no answer.
The authors of the new study acknowledged that more research is needed to assess patient attitudes toward AI-generated content and to ensure that the answers are acceptable to patients. They also discussed the need to use chatbots in ways that are ethical and don’t harm patients.
Researchers from Stanford University led the study, and they worked with researchers from Kaiser Permanente, Brighton Vision Center, the University of Colorado and Vanderbilt Eye Institute.