Medindia

How Does ChatGPT Help Physicians Pick the Right Imaging Tests?

by Colleen Fleiss on Jun 25 2023 11:41 PM

The study does not argue that artificial intelligence will replace physicians in choosing imaging tests, but it does reduce the time needed to make these decisions.

ChatGPT was found to support the process of clinical decision-making, including the selection of appropriate radiological imaging tests for breast cancer screening (1).
The study was led by investigators from Mass General Brigham in the US, and published in the Journal of the American College of Radiology.

"In this scenario, ChatGPT's abilities were impressive," said corresponding author Marc D. Succi, associate chair of Innovation and Commercialisation at Mass General Brigham Radiology and executive director of the MESH Incubator.

Achieving Optimal Imaging Results with ChatGPT

"I see it acting like a bridge between the referring healthcare professional and the expert radiologist -- stepping in as a trained consultant to recommend the right imaging test at the point of care, without delay.

"This could reduce administrative time on both referring and consulting physicians in making these evidence-backed decisions, optimise workflow, reduce burnout, and reduce patient confusion and wait times," Succi said.

In the study, the researchers asked ChatGPT 3.5 and ChatGPT 4 to help decide which imaging tests to use for 21 fictitious patient scenarios involving breast cancer screening or breast pain, judging the responses against the American College of Radiology's Appropriateness Criteria.

They queried the AI both in an open-ended way and by giving ChatGPT a list of imaging options to choose from. They tested ChatGPT 3.5 as well as ChatGPT 4, a newer, more advanced version.

ChatGPT 4 outperformed 3.5, especially when given the available imaging options.

For example, when asked about breast cancer screenings and given multiple-choice imaging options, ChatGPT 3.5 answered an average of 88.9 percent of prompts correctly, while ChatGPT 4 got about 98.4 percent right.
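Accuracy figures like these come from scoring each model's chosen test against the guideline-recommended answer for every scenario. As a rough illustration only (the answer key and model responses below are invented, not the study's data), the tally could look like this:

```python
# Hypothetical sketch of scoring model answers against a guideline answer key.
# All scenario data here is invented for illustration; it is not study data.

ANSWER_KEY = {  # scenario id -> guideline-recommended imaging test (invented)
    1: "mammography",
    2: "ultrasound",
    3: "mammography",
}

def accuracy(responses):
    """Fraction of scenarios where the model's choice matches the answer key."""
    correct = sum(1 for sid, choice in responses.items()
                  if ANSWER_KEY.get(sid) == choice)
    return correct / len(responses)

# Invented example responses for two models.
gpt4_responses = {1: "mammography", 2: "ultrasound", 3: "mammography"}
gpt35_responses = {1: "mammography", 2: "mammography", 3: "mammography"}

print(f"GPT-4:   {accuracy(gpt4_responses):.1%}")   # all three match the key
print(f"GPT-3.5: {accuracy(gpt35_responses):.1%}")  # misses scenario 2
```

In the study itself, the open-ended and multiple-choice prompting styles were scored separately, which is why the reported percentages differ by prompt format.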

"This study doesn't compare ChatGPT to existing radiologists because the existing gold standard is actually a set of guidelines from the American College of Radiology, which is the comparison we performed," Succi said.

Reference:
  1. Evaluating GPT as an Adjunct for Radiologic Decision Making: GPT-4 Versus GPT-3.5 in a Breast Imaging Pilot - (https://www.jacr.org/article/S1546-1440(23)00394-0/fulltext)

Source: IANS

