
AI in the ED: Assessing the efficacy of GPT models vs. physicians in medical score calculation

Bibliographic Details
Published in:The American journal of emergency medicine 2024-05, Vol.79, p.161-166
Main Authors: Haim, Gal Ben, Braun, Adi, Eden, Haggai, Burshtein, Livnat, Barash, Yiftach, Irony, Avinoah, Klang, Eyal
Format: Article
Language:English
Description
Summary:Artificial Intelligence (AI) models such as GPT-3.5 and GPT-4 have shown promise across various domains but remain underexplored in healthcare. Emergency Departments (EDs) rely on established scoring systems, such as the NIHSS and the HEART score, to guide clinical decision-making. This study evaluated the proficiency of GPT-3.5 and GPT-4 against experienced ED physicians in calculating five commonly used medical scores. This retrospective study analyzed data from 150 patients who visited the ED over a one-week period. Both AI models and two human physicians were tasked with calculating the NIH Stroke Scale (NIHSS), Canadian Syncope Risk Score, Alvarado Score for Acute Appendicitis, Canadian CT Head Rule, and HEART Score. Cohen's Kappa statistic and AUC values were used to assess inter-rater agreement and predictive performance, respectively. The highest level of agreement was observed between the two human physicians (Kappa = 0.681), while GPT-4 also showed moderate to substantial agreement with them (Kappa values of 0.473 and 0.576); GPT-3.5 had the lowest agreement with the human scorers. Human physicians achieved a higher ROC-AUC on three of the five scores, although none of the differences were statistically significant. These results highlight the stronger predictive performance of human expertise over currently available automated systems for these outcomes. While the AI models demonstrated some concordance with human expertise, they fell short of emulating the complex clinical judgments that physicians make. The study suggests that current AI models may serve as supplementary tools but are not ready to replace human expertise in high-stakes settings such as the ED. Further research is needed to explore the capabilities and limitations of AI in emergency medicine.
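For readers who want to see how the two statistics named in the abstract are typically computed, the minimal Python sketch below calculates Cohen's Kappa (inter-rater agreement) and ROC-AUC (predictive performance) with scikit-learn. The rater arrays and outcome labels are illustrative placeholders, not data from the study.

```python
# Minimal sketch of the two statistics used in the study's analysis.
# All arrays below are hypothetical examples, not values from the paper.
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Hypothetical categorical scores assigned to the same patients by two raters
# (e.g., a physician vs. GPT-4 on a risk-score category).
physician_scores = [2, 3, 1, 4, 2, 0, 3, 1]
gpt4_scores      = [2, 3, 2, 4, 1, 0, 3, 1]

# Inter-rater agreement between the two sets of scores.
kappa = cohen_kappa_score(physician_scores, gpt4_scores)

# Hypothetical binary outcomes and the continuous scores used to predict them
# (e.g., HEART score vs. occurrence of a major adverse cardiac event).
outcomes         = [0, 1, 0, 1, 0, 0, 1, 0]
predicted_scores = [1, 6, 2, 5, 3, 0, 7, 2]

# Predictive performance of the score against the observed outcome.
auc = roc_auc_score(outcomes, predicted_scores)

print(f"Cohen's Kappa: {kappa:.3f}, ROC-AUC: {auc:.3f}")
```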
ISSN:0735-6757
1532-8171
DOI:10.1016/j.ajem.2024.02.016