Study shows ChatGPT can accurately analyze medical charts for clinical research, other applications

by UT Southwestern Medical Center

A study from UT Southwestern Medical Center shows that ChatGPT can accurately extract structured data from free-text medical charts. The findings, published in npj Digital Medicine, could significantly accelerate clinical research and lead to new innovations in computerized clinical decision-making aids.

“By transforming oceans of free-text health care data into structured knowledge, this work paves the way for leveraging artificial intelligence to derive insights, improve clinical decision-making, and ultimately enhance patient care,” said study leader Yang Xie, Ph.D., Professor in the Peter O’Donnell Jr. School of Public Health and the Lyda Hill Department of Bioinformatics at UT Southwestern.

Dr. Xie is also Associate Dean of Data Sciences at UT Southwestern Medical School, Director of the Quantitative Biomedical Research Center, and a member of the Harold C. Simmons Comprehensive Cancer Center.

Much of the research in the Xie Lab focuses on developing and using data science and AI tools to improve health care. She and her colleagues wondered whether ChatGPT might speed the process of analyzing clinical notes—the memos physicians write to document patients’ visits, diagnoses, and statuses as part of their medical record—to find relevant data for clinical research and other uses.

Clinical notes are a treasure trove of information, Dr. Xie explained; however, because they are written in free text, extracting structured data from them typically requires a trained medical professional to read and annotate them. This process demands a huge investment of time and often resources, and it can also introduce human bias.
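To make the free-text-to-structured-data step concrete, here is a minimal, hypothetical sketch. The note text, field names, and keyword rules are invented for illustration; they stand in for the human annotation (or LLM extraction) the article describes, not the study's actual method.

```python
import re

# Invented example of one free-text pathology note.
SAMPLE_NOTE = (
    "Pathology: 3.2 cm adenocarcinoma of the right upper lobe. "
    "Two of seven regional lymph nodes positive for carcinoma. Stage IIB."
)

def extract_structured(note: str) -> dict:
    """Crude keyword/regex extraction standing in for a trained annotator,
    just to show how free text becomes structured fields."""
    lower = note.lower()
    stage_match = re.search(r"stage\s+([ivx]+[a-c]?)", lower)
    return {
        "histologic_subtype": "adenocarcinoma" if "adenocarcinoma" in lower else "not stated",
        "lymph_nodes_involved": "positive" in lower,
        "stage": stage_match.group(1).upper() if stage_match else "not stated",
    }

record = extract_structured(SAMPLE_NOTE)
# record -> {"histologic_subtype": "adenocarcinoma",
#            "lymph_nodes_involved": True, "stage": "IIB"}
```

Rule-based extraction like this is brittle, which is exactly why notes are usually annotated by hand; the study asks whether an LLM can fill that role instead.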

Existing programs that use natural language processing require extensive human annotation and model training. As a result, clinical notes are largely underused for research purposes.

To determine whether ChatGPT could convert clinical notes into structured data, Dr. Xie and her colleagues had it analyze more than 700 sets of pathology notes to identify the major features of primary tumors, whether lymph nodes were involved, and the cancer stage and subtype.

Overall, Dr. Xie said, ChatGPT made these determinations with an average accuracy of 89%, based on reviews by human readers.
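An accuracy figure like this is typically a per-field agreement rate between model output and human annotation. The sketch below shows one plausible way to tally it; the records, field names, and labels are toy data invented for illustration, not the study's evaluation code.

```python
def field_accuracy(model_records, human_records, field):
    """Fraction of notes where the model's value for one field
    matches the human reviewer's annotation."""
    matches = sum(
        1 for m, h in zip(model_records, human_records) if m[field] == h[field]
    )
    return matches / len(human_records)

# Toy data: three notes, two extracted fields each.
model = [{"stage": "IIB", "subtype": "adenocarcinoma"},
         {"stage": "IA",  "subtype": "squamous"},
         {"stage": "III", "subtype": "adenocarcinoma"}]
human = [{"stage": "IIB", "subtype": "adenocarcinoma"},
         {"stage": "IB",  "subtype": "squamous"},
         {"stage": "III", "subtype": "adenocarcinoma"}]

# Stage agrees on 2 of 3 notes; subtype on all 3.
stage_acc = field_accuracy(model, human, "stage")      # 2/3
subtype_acc = field_accuracy(model, human, "subtype")  # 1.0
```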

The human readers' analysis took several weeks of full-time work, compared with the few days it took to fine-tune data extraction with ChatGPT. ChatGPT's accuracy was also significantly better than that of the traditional natural language processing methods tested for this task.

To test whether this approach applies to other diseases, Dr. Xie and her colleagues used ChatGPT to extract information about cancer grade and margin status from 191 clinical notes from Children’s Health patients with osteosarcoma, the most common type of bone cancer in children and adolescents. Here, ChatGPT returned information with nearly 99% accuracy on grade and 100% accuracy on margin status.

Dr. Xie noted that the results were strongly influenced by the prompts ChatGPT was given for each task, a practice known as prompt engineering. Providing multiple options to choose from, giving examples of appropriate responses, and directing ChatGPT to rely on evidence from the text improved its performance.
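The three prompt patterns named above can be sketched in a single prompt template. The wording, field, and answer options below are illustrative assumptions, not the study's actual prompts.

```python
def build_extraction_prompt(note: str) -> str:
    """Assemble an extraction prompt using the three patterns the article
    describes: fixed answer options, an example response, and an
    instruction to ground the answer in evidence from the note."""
    return (
        "You are extracting structured data from a pathology note.\n"
        "Question: What is the histologic subtype?\n"
        # Pattern 1: constrain the answer space with explicit options.
        "Choose one of: adenocarcinoma, squamous cell carcinoma, other, not stated.\n"
        # Pattern 2: give an example of an appropriate response.
        "Example: for 'moderately differentiated adenocarcinoma', answer 'adenocarcinoma'.\n"
        # Pattern 3: direct the model to rely on evidence.
        "Quote the sentence from the note that supports your answer.\n\n"
        f"Note:\n{note}"
    )

prompt = build_extraction_prompt("Sections show invasive squamous cell carcinoma.")
```

Constraining the answer space in particular makes the model's output easy to parse back into a structured field, which matters when the same prompt is run over hundreds of notes.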

She added that using ChatGPT or other large language models to extract structured data from clinical notes could not only speed research but also aid clinical trial enrollment by matching patients’ information to clinical trial protocols. However, she said, ChatGPT won’t replace the need for human physicians.

“Even though this technology is an extremely promising way to save time and effort, we should always use it with caution. Rigorous and continuous evaluation is very important,” Dr. Xie said.

More information:
Jingwei Huang et al, A critical assessment of using ChatGPT for extracting structured data from clinical notes, npj Digital Medicine (2024). DOI: 10.1038/s41746-024-01079-8

Citation:
Study shows ChatGPT can accurately analyze medical charts for clinical research, other applications (2024, May 13)
retrieved 15 June 2024
from https://medicalxpress.com/news/2024-05-chatgpt-accurately-medical-clinical-applications.html




