Accuracy of a Commercial Large Language Model Protocol: Gage Repeatability and Reproducibility Study

Bibliographic Details
Published in: Journal of medical Internet research 2024-09, Vol.26 (12)
Main Authors: Franc, Jeffrey Micheal; Hertelendy, Attila Julius; Cheng, Lenard; Hata, Ryan; Verde, Manuela
Format: Article
Language: English
Subjects: Analysis; Artificial intelligence; Computational linguistics; Disaster medicine; Language processing; Natural language interfaces; Technology application; Triage (Medicine)
ISSN: 1439-4456
DOI: 10.2196/55648
Online Access: Get full text
Source: Publicly Available Content Database; Social Science Premium Collection (Proquest) (PQ_SDU_P3); Library & Information Science Collection; PubMed Central
Description:

The release of ChatGPT (OpenAI) in November 2022 drastically reduced the barrier to using artificial intelligence by allowing a simple web-based text interface to a large language model (LLM). One use case where ChatGPT could be useful is in triaging patients at the site of a disaster using the Simple Triage and Rapid Treatment (START) protocol. However, LLMs experience several common errors, including hallucinations (also called confabulations) and prompt dependency. This study addresses the research problem “Can ChatGPT adequately triage simulated disaster patients using the START protocol?” by measuring three outcomes: repeatability, reproducibility, and accuracy.

Nine prompts were developed by 5 disaster medicine physicians. A Python script queried ChatGPT Version 4 for each prompt combined with 391 validated simulated patient vignettes. Ten repetitions of each combination were performed for a total of 35,190 simulated triages. A reference standard START triage code for each simulated case was assigned by 2 disaster medicine specialists (JMF and MV), with a third specialist (LC) added if the first two did not agree. Results were evaluated using a gage repeatability and reproducibility study (gage R and R). Repeatability was defined as variation due to repeated use of the same prompt. Reproducibility was defined as variation due to the use of different prompts on the same patient vignette. Accuracy was defined as agreement with the reference standard.

Although 35,102 (99.7%) queries returned a valid START score, there was considerable variability. Repeatability (use of the same prompt repeatedly) was 14% of the overall variation. Reproducibility (use of different prompts) was 4.1% of the overall variation. The accuracy of ChatGPT for START was 63.9%, with a 32.9% overtriage rate and a 3.1% undertriage rate. Accuracy varied by prompt, with a maximum of 71.8% and a minimum of 46.7%.

This study indicates that ChatGPT version 4 is insufficient to triage simulated disaster patients via the START protocol. It demonstrated suboptimal repeatability and reproducibility, and the overall accuracy of triage was only 63.9%. Health care professionals are advised to exercise caution while using commercial LLMs for vital medical determinations, given that these tools may commonly produce inaccurate data, colloquially referred to as hallucinations or confabulations. Artificial intelligence–guided tools should undergo rigorous statistical evaluation, using methods such as gage R and R, before implementation into clinical settings.
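
The description states that a Python script queried ChatGPT Version 4 once per prompt-vignette combination, ten times each (9 prompts x 391 vignettes x 10 repetitions = 35,190 queries). The authors' script is not part of this record; the following is a minimal sketch of what such a harness could look like using the current OpenAI Python client. The model string, the message layout, and the `parse_start_code` helper are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a repeated-triage harness matching the design in the
# description. The real prompts, vignettes, and reply parsing are not in
# this record; everything below is illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The four START triage categories, in a fixed scan order.
VALID_CODES = ("GREEN", "YELLOW", "RED", "BLACK")

def parse_start_code(text: str) -> str | None:
    """Hypothetical parser: return the first START code found in a reply,
    or None for an invalid reply (0.3% of queries in the study)."""
    upper = text.upper()
    for code in VALID_CODES:
        if code in upper:
            return code
    return None

def triage_once(prompt: str, vignette: str) -> str | None:
    """Send one prompt plus one vignette to the model and parse the code."""
    reply = client.chat.completions.create(
        model="gpt-4",  # stand-in for "ChatGPT Version 4" in the paper
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": vignette},
        ],
    )
    return parse_start_code(reply.choices[0].message.content or "")

def run_study(prompts: list[str], vignettes: list[str], reps: int = 10):
    """Yield one row per simulated triage: (prompt_id, vignette_id, rep, code)."""
    for p_id, prompt in enumerate(prompts):
        for v_id, vignette in enumerate(vignettes):
            for rep in range(reps):
                yield p_id, v_id, rep, triage_once(prompt, vignette)
```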
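The gage R and R analysis partitions the observed variation into repeatability (variation across repetitions of the same prompt), reproducibility (variation across different prompts on the same vignette), and vignette-to-vignette variation. The paper's statistical software and its numeric coding of the START categories are not given in this record; below is a sketch of a standard crossed gage R and R variance-component calculation, assuming a balanced results table and an ordinal 0-3 coding of the four categories (both assumptions).

```python
# Sketch of a crossed gage R and R variance decomposition. Prompts play
# the role of "operators" and vignettes the role of "parts".

import pandas as pd

def gage_rr(df: pd.DataFrame) -> dict[str, float]:
    """df columns: prompt_id, vignette_id, score (numeric), one row per query."""
    p = df["prompt_id"].nunique()    # operators (9 prompts in the study)
    v = df["vignette_id"].nunique()  # parts (391 vignettes)
    # Replicates per cell; assumes a balanced design (10 in the study).
    r = int(df.groupby(["prompt_id", "vignette_id"])["score"].count().iloc[0])

    grand = df["score"].mean()
    # Sums of squares for a balanced two-way crossed design with replication.
    ss_p = v * r * ((df.groupby("prompt_id")["score"].mean() - grand) ** 2).sum()
    ss_v = p * r * ((df.groupby("vignette_id")["score"].mean() - grand) ** 2).sum()
    cell_means = df.groupby(["prompt_id", "vignette_id"])["score"].mean()
    ss_cells = r * ((cell_means - grand) ** 2).sum()
    ss_pv = ss_cells - ss_p - ss_v                    # prompt x vignette interaction
    ss_e = ((df["score"] - grand) ** 2).sum() - ss_cells

    # Mean squares.
    ms_p = ss_p / (p - 1)
    ms_v = ss_v / (v - 1)
    ms_pv = ss_pv / ((p - 1) * (v - 1))
    ms_e = ss_e / (p * v * (r - 1))

    # Variance components (negative estimates are clipped to zero).
    var_repeat = ms_e                                 # repeatability
    var_reprod = max((ms_p - ms_pv) / (v * r), 0.0) + max((ms_pv - ms_e) / r, 0.0)
    var_part = max((ms_v - ms_pv) / (p * r), 0.0)     # vignette-to-vignette

    total = var_repeat + var_reprod + var_part
    return {
        "repeatability_pct": 100 * var_repeat / total,
        "reproducibility_pct": 100 * var_reprod / total,
        "vignette_pct": 100 * var_part / total,
    }
```

A decomposition of this kind underlies the reported shares of overall variation (14% repeatability, 4.1% reproducibility); whether the paper reports variance contributions, as here, or percent study variation (based on standard deviations) is not stated in this record.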
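The accuracy figures distinguish exact agreement with the reference standard from overtriage (assigning a more urgent category than the reference) and undertriage (a less urgent one). A minimal sketch of that per-query classification follows; the acuity ranking, in particular the placement of the expectant (BLACK) category, is an assumption rather than the paper's definition.

```python
# Sketch of the per-query classification behind the reported 63.9%
# accuracy, 32.9% overtriage, and 3.1% undertriage rates.

from collections import Counter

# Hypothetical acuity ranking; handling of BLACK varies between studies.
ACUITY = {"GREEN": 0, "YELLOW": 1, "RED": 2, "BLACK": 3}

def score_triage(predicted: str, reference: str) -> str:
    """Classify one valid triage result against the reference standard."""
    if predicted == reference:
        return "accurate"
    if ACUITY[predicted] > ACUITY[reference]:
        return "overtriage"
    return "undertriage"

def triage_rates(pairs: list[tuple[str, str]]) -> dict[str, float]:
    """Rates of accurate/over/undertriage among valid (predicted, reference) pairs."""
    counts = Counter(score_triage(pred, ref) for pred, ref in pairs)
    n = sum(counts.values())
    return {k: counts[k] / n for k in ("accurate", "overtriage", "undertriage")}
```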