
Development and validation of a deep interpretable network for continuous acute kidney injury prediction in critically ill patients


Bibliographic Details
Published in: Artificial Intelligence in Medicine 2024-03, Vol. 149, p. 102785, Article 102785
Main Authors: Yang, Meicheng, Liu, Songqiao, Hao, Tong, Ma, Caiyun, Chen, Hui, Li, Yuwen, Wu, Changde, Xie, Jianfeng, Qiu, Haibo, Li, Jianqing, Yang, Yi, Liu, Chengyu
Format: Article
Language: English
Description
Summary: Early detection of acute kidney injury (AKI) may provide a crucial window of opportunity to prevent further injury, helping to improve clinical outcomes. This study aimed to develop a deep interpretable network for continuously predicting 24-hour AKI risk in real time and to evaluate its performance internally and externally in critically ill patients. A total of 21,163 patients' electronic health records sourced from Beth Israel Deaconess Medical Center (BIDMC) were first included in building the model. Two external validation populations comprised 3025 patients from the Philips eICU Research Institute and 2625 patients from Zhongda Hospital Southeast University. A total of 152 intelligently engineered predictors were extracted on an hourly basis. The prediction model, referred to as DeepAKI, was designed with the basic framework of squeeze-and-excitation networks with dilated causal convolution embedded. The integrated gradients method was used to explain the prediction model. On the internal validation set (3175 [15 %] patients from BIDMC) and the two external validation sets, DeepAKI obtained areas under the curve of 0.799 (95 % CI 0.791–0.806), 0.763 (95 % CI 0.755–0.771), and 0.676 (95 % CI 0.668–0.684) for continuous AKI prediction, respectively. For model interpretability, clinically relevant variables contributing importantly to the predictions were identified, and individual explanations along the timeline were explored to show how AKI risk arose. The potential threats to generalisability in deep learning-based models when deployed across health systems in real-world settings were analyzed.
•A deep neural network developed for continuously predicting the 24-hour AKI risk
•The model's interpretability helps the understanding of AKI risk at individual patient levels
•Generalisability of AI-based models is important when deployed in real-world settings
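The abstract states that DeepAKI embeds dilated causal convolution in a squeeze-and-excitation framework so that each hourly risk estimate depends only on current and past measurements, never future ones. The paper does not publish its implementation; the following is a minimal pure-Python sketch of the dilated-causal-convolution idea only (hypothetical weights and toy input, no SE blocks), to illustrate why the operation suits real-time hourly prediction.

```python
def dilated_causal_conv(x, weights, dilation):
    """1-D dilated causal convolution with implicit left zero-padding.

    Output at time t combines x[t], x[t - dilation], x[t - 2*dilation], ...
    Because no index ever exceeds t, the filter is causal: the prediction
    at hour t can be computed as soon as hour t's data arrives.
    """
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            idx = t - i * dilation  # reach i * dilation steps into the past
            if idx >= 0:            # positions before the series start are zero
                acc += w * x[idx]
        out.append(acc)
    return out

# Toy hourly signal with a hypothetical 2-tap kernel and dilation 2:
# each output mixes the current value with the value two hours earlier.
risk_features = dilated_causal_conv([1.0, 2.0, 3.0, 4.0], [1.0, 0.5], dilation=2)
```

Stacking such layers with growing dilation (1, 2, 4, ...) lets the receptive field cover a long hourly history, such as a 24-hour window, with few layers, which is the usual motivation for this operator in temporal EHR models.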
ISSN: 0933-3657
eISSN: 1873-2860
DOI: 10.1016/j.artmed.2024.102785