ChatGPT could help diagnose patients in future emergency rooms, according to a new pilot study examining how the large language model could be used to support doctors.
The research, published in the Annals of Emergency Medicine, suggests that the artificial intelligence (AI) chatbot can diagnose patients about as well as experienced doctors. The findings will be presented at the European Congress of Emergency Medicine, which begins this weekend.
Researchers at Jeroen Bosch Hospital in the Netherlands fed doctors’ notes and anonymized information about 30 patients, including examinations, symptoms, and lab results, into two versions of ChatGPT.
They found an overlap of around 60% between the emergency doctors’ lists of possible diagnoses and the chatbot’s.
“We found that ChatGPT worked well in generating a list of probable diagnoses and suggesting the most likely option,” says study author Hidde ten Berg, a member of the emergency department at Jeroen Bosch Hospital, in a statement.
“We also found a large overlap with the doctors’ lists of likely diagnoses. Simply put, this indicates that ChatGPT was able to suggest medical diagnoses much like a human doctor would.”
Emergency physicians had the correct diagnosis in their top five list of likely diagnoses 87% of the time, while ChatGPT version 3.5 had the correct diagnosis in its list 97% of the time, compared with 87% for ChatGPT version 4.0.
This tool is not a medical device
The research was not used to influence patient care, but to test the potential and feasibility of using generative AI for diagnosis.
But the technology is not yet ready for clinical use.
“One of the problems, at least in Europe… is that the legislation is very harsh,” study author Steef Kurstjens from the Department of Clinical Chemistry and Hematology at Jeroen Bosch Hospital told Euronews Next.
“So these types of tools are not medical devices. So if you use them to affect patient care, you are using a tool that is not a medical device as a medical device, and that is not allowed. So, I think what is needed [is to pass] new legislation if you want to use this,” he added.
Patient data privacy is another major concern surrounding the use of generative AI in healthcare, and some experts are urging policymakers to try to reduce any potential risks through regulation.
Ultimately, AI could help healthcare professionals
According to experts, one of the most interesting uses of AI in healthcare could be saving doctors’ time, helping them make diagnoses or easing part of the administrative burden on the healthcare system.
“As a support tool, it could help doctors create a list [of diagnoses] or get ideas they would not have thought of themselves; or, for less experienced doctors who are still in training, it could be a support tool in their daily care,” Kurstjens tells Euronews Next.
“I think the future of large medical-related language models that are trained with medical data is really interesting, like Med-PaLM. It’s very interesting to see how they perform and if they outperform ChatGPT,” he adds.
The researchers also suggested the tool could save time and reduce waits in emergency departments.
Youri Yordanov, of the emergency department at St. Antoine Hospital in Paris and chair of the abstract committee at this year’s emergency medicine congress, said in a statement that doctors are still a long way from using ChatGPT clinically.
Yordanov, who was not involved in the research, added that it is important to study the technology and see how it could help doctors and patients.
“People who have to go to the emergency room want to be seen as soon as possible and for their problem to be diagnosed and treated correctly,” he says.
“I hope that research will continue in this field and that it can ultimately support the work of busy healthcare professionals,” he concludes.