UVA Health study: AI promises to improve patient care, but human doctors remain necessary
Rebecca Barnabi

With hospitals already deploying artificial intelligence to improve patient care, a new study has found that using ChatGPT Plus does not significantly improve the accuracy of doctors' diagnoses compared with conventional resources.

The study, from UVA Health's Dr. Andrew S. Parsons and colleagues, enlisted 50 physicians in family medicine, internal medicine and emergency medicine to put ChatGPT Plus to the test.

Half were randomly assigned to use ChatGPT Plus to diagnose complex cases, while the other half relied on conventional methods such as medical reference sites (for example, UpToDate) and Google. The researchers then compared the resulting diagnoses, finding that accuracy was similar across the two groups.

That said, ChatGPT alone outperformed both groups, suggesting that it still holds promise for improving patient care. Physicians, however, will need more training and experience with the emerging technology to capitalize on its potential, the researchers concluded.

For now, they said, ChatGPT remains best used to augment, rather than replace, human physicians.

“Our study shows that AI alone can be an effective and powerful tool for diagnosis,” said Parsons, who oversees the teaching of clinical skills to medical students at the UVA School of Medicine and co-leads the Clinical Reasoning Research Collaborative. “We were surprised to find that adding a human physician to the mix actually reduced diagnostic accuracy, though it improved efficiency. These results likely mean that we need formal training in how best to use AI.”

Chatbots built on “large language models,” which produce human-like responses, are growing in popularity and have shown an impressive ability to take patient histories, communicate empathetically and even solve complex medical cases. But, for now, they still require the involvement of a human doctor.

Parsons and his colleagues were eager to determine how artificial intelligence can be used most effectively, so they launched a randomized, controlled trial at three leading-edge hospitals: UVA Health, Stanford and Harvard’s Beth Israel Deaconess Medical Center.

The participating doctors made diagnoses for “clinical vignettes” based on real-life patient-care cases. The case studies included details about patients’ histories, physical exams and lab test results. The researchers then scored the results and examined how quickly the two groups made their diagnoses.

The median diagnostic accuracy for the doctors using ChatGPT Plus was 76.3 percent, while the accuracy for the physicians using conventional approaches was 73.7 percent. The ChatGPT group also reached its diagnoses slightly more quickly overall: 519 seconds compared with 565 seconds.

The researchers were surprised at how well ChatGPT Plus alone performed, with a median diagnostic accuracy of more than 92 percent. They say the accuracy may reflect the prompts used in the study, suggesting that physicians likely will benefit from training on how to write prompts effectively. Alternatively, they say, healthcare organizations could purchase predefined prompts to implement in clinical workflows and documentation.

The researchers also caution that artificial intelligence likely would fare less well in real life, where many other aspects of clinical reasoning come into play, especially in determining downstream effects of diagnoses and treatment decisions. They are urging additional studies to assess large language models’ abilities in those areas and are conducting a similar study on management decision-making.

“As AI becomes more embedded in healthcare, it’s essential to understand how we can leverage these tools to improve patient care and the physician experience,” Parsons said. “This study suggests there is much work to be done in terms of optimizing our partnership with AI in the clinical environment.”

Following up on the groundbreaking work, the four study sites have launched a bi-coastal AI evaluation network called ARiSE (AI Research and Science Evaluation) to further evaluate GenAI outputs in healthcare.

The researchers published their results in the scientific journal JAMA Network Open. Funding for the research was provided by the Gordon and Betty Moore Foundation.
