Gastric Emptying Scintigraphy Protocol Optimization Using Machine Learning for the Detection of Delayed Gastric Emptying
Published in: Diagnostics (Basel) 2024-06, Vol. 14 (12), p. 1240
Main Authors: , , ,
Format: Article
Language: English
Subjects:
Online Access: Get full text
Summary:

The purpose of this study is to examine the feasibility of a machine learning (ML) system for optimizing a gastric emptying scintigraphy (GES) protocol for the detection of delayed gastric emptying (GE), which is considered a primary indication for the diagnosis of gastroparesis.

An ML model was developed using the JADBio AutoML artificial intelligence (AI) platform. The model uses the percent GE at various imaging time points following ingestion of a standardized radiolabeled meal to predict normal versus delayed GE at the conclusion of the 4 h GES study. The model was trained and tested on a cohort of 1002 patients who underwent GES, using a stratified 70/30 split for training vs. testing. The ML software automated the generation of optimal predictive models by combining data preprocessing, feature selection, and predictive modeling algorithms.

The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was used to evaluate predictive performance. Several models were developed using different combinations of imaging time points as input features and different methodologies to achieve optimal output. Using the GE values at 0.5 h, 1 h, 1.5 h, 2 h, and 2.5 h as predictors of the 4 h outcome, the analysis produced an AUC of 90.7% and a balanced accuracy (BA) of 80.0% on the test set. This performance was comparable to the training set results (AUC = 91.5%, BA = 84.7%) within the 95% confidence interval (CI), demonstrating robust predictive capability. Feature selection showed that the 2.5 h GE value alone was a statistically significant independent predictor of the 4 h outcome, with slightly higher test set performance (AUC = 92.4%, BA = 83.3%), emphasizing its dominance as the primary predictor of delayed GE. ROC analysis was also performed for the single imaging time points at 1 h and 2 h to assess how well each independently predicts the 4 h outcome. Furthermore, the ML model was tested for its ability to predict "flipping" cases with normal GE at 1 h and 2 h that became abnormal, with delayed GE, at 4 h.

An AI/ML model was designed and trained to predict delayed GE using a limited number of imaging time points in a 4 h GES clinical protocol. This study demonstrates the feasibility of employing ML for GES optimization in the detection of delayed GE, potentially shortening the protocol's time length without compromising diagnostic accuracy.
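The modeling workflow described in the summary (percent GE at early imaging time points as predictors, a stratified 70/30 train/test split, and ROC AUC plus balanced accuracy as metrics) can be illustrated with a minimal sketch. The study used the JADBio AutoML platform; the scikit-learn logistic regression below is only a stand-in model, and the file name and column names (`ges_records.csv`, `ge_0.5h` … `ge_2.5h`, `delayed_at_4h`) are hypothetical.

```python
# Minimal sketch of the abstract's workflow, NOT the JADBio AutoML pipeline:
# train a binary classifier on percent GE at early time points to predict
# delayed GE at 4 h, using a stratified 70/30 split and reporting ROC AUC
# and balanced accuracy on the held-out test set.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: percent GE at each imaging time point plus the 4 h label.
df = pd.read_csv("ges_records.csv")                      # hypothetical file
features = ["ge_0.5h", "ge_1h", "ge_1.5h", "ge_2h", "ge_2.5h"]  # assumed columns
X, y = df[features], df["delayed_at_4h"]                 # y = 1 if GE delayed at 4 h

# Stratified 70/30 train/test split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0
)

# Simple stand-in model; the study's AutoML platform searched over
# preprocessing, feature selection, and modeling algorithms automatically.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

test_scores = model.predict_proba(X_test)[:, 1]
print("Test ROC AUC:", roc_auc_score(y_test, test_scores))
print("Test balanced accuracy:",
      balanced_accuracy_score(y_test, model.predict(X_test)))
```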
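The single-time-point ROC analyses and the search for "flipping" cases can be sketched in the same way. The cutoffs used below to define normal early emptying (≥ 10% GE at 1 h, ≥ 40% at 2 h) are assumptions drawn from commonly cited GES consensus guideline values, not from the article, and the column names are the same hypothetical ones as above.

```python
# Sketch of (a) ROC AUC for a single GE time point as the sole predictor of the
# 4 h outcome, and (b) flagging "flipping" cases: normal GE at 1 h and 2 h that
# becomes delayed at 4 h.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("ges_records.csv")          # hypothetical file, as above

# (a) A single GE value needs no fitted model for ROC analysis: lower GE at a
#     given time point indicates higher risk of delayed emptying at 4 h, so the
#     negated GE value serves as the risk score.
for tp in ["ge_1h", "ge_2h", "ge_2.5h"]:     # assumed column names
    auc = roc_auc_score(df["delayed_at_4h"], -df[tp])
    print(f"AUC using {tp} alone: {auc:.3f}")

# (b) "Flipping" cases under the assumed normal-emptying cutoffs.
normal_early = (df["ge_1h"] >= 10) & (df["ge_2h"] >= 40)   # assumed cutoffs
flipping = df[normal_early & (df["delayed_at_4h"] == 1)]
print("Flipping cases:", len(flipping))
```

Counting such flipping cases is one way to quantify what would be missed if imaging stopped at 2 h, which is the practical question behind shortening the protocol.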
ISSN: 2075-4418
DOI: 10.3390/diagnostics14121240