
Development and validation of a deep interpretable network for continuous acute kidney injury prediction in critically ill patients

Early detection of acute kidney injury (AKI) may provide a crucial window of opportunity to prevent further injury, which helps improve clinical outcomes. This study aimed to develop a deep interpretable network for continuously predicting the 24-hour AKI risk in real-time and evaluate its performance internally and externally in critically ill patients.


Bibliographic Details
Published in: Artificial intelligence in medicine, 2024-03, Vol. 149, p. 102785, Article 102785
Main Authors: Yang, Meicheng, Liu, Songqiao, Hao, Tong, Ma, Caiyun, Chen, Hui, Li, Yuwen, Wu, Changde, Xie, Jianfeng, Qiu, Haibo, Li, Jianqing, Yang, Yi, Liu, Chengyu
Format: Article
Language:English
Subjects: Acute kidney injury; Critical care; External validation; Model interpretation; Predictive model
container_start_page 102785
container_title Artificial intelligence in medicine
container_volume 149
creator Yang, Meicheng
Liu, Songqiao
Hao, Tong
Ma, Caiyun
Chen, Hui
Li, Yuwen
Wu, Changde
Xie, Jianfeng
Qiu, Haibo
Li, Jianqing
Yang, Yi
Liu, Chengyu
description Early detection of acute kidney injury (AKI) may provide a crucial window of opportunity to prevent further injury, helping to improve clinical outcomes. This study aimed to develop a deep interpretable network for continuously predicting the 24-hour AKI risk in real time and to evaluate its performance internally and externally in critically ill patients. A total of 21,163 patients' electronic health records sourced from Beth Israel Deaconess Medical Center (BIDMC) were first included in building the model. Two external validation populations included 3025 patients from the Philips eICU Research Institute and 2625 patients from Zhongda Hospital Southeast University. A total of 152 intelligently engineered predictors were extracted on an hourly basis. The prediction model, referred to as DeepAKI, was designed on the basic framework of squeeze-and-excitation networks with dilated causal convolution embedded. The integrated gradients method was utilized to explain the prediction model. On the internal validation set (3175 [15 %] patients from BIDMC) and the two external validation sets, DeepAKI obtained areas under the curve of 0.799 (95 % CI 0.791–0.806), 0.763 (95 % CI 0.755–0.771) and 0.676 (95 % CI 0.668–0.684) for continuous AKI prediction, respectively. For model interpretability, clinically relevant variables contributing to the model prediction were identified, and individual explanations along the timeline were explored to show how AKI risk arose. The potential threats to generalisability of deep learning-based models deployed across health systems in real-world settings were analyzed.
•A deep neural network was developed for continuously predicting the 24-hour AKI risk
•Model interpretability helps the understanding of AKI risk at individual patient levels
•Generalisability of AI-based models is important when deployed in real-world settings
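For readers wanting a concrete picture of the two building blocks the abstract names, here is a minimal NumPy sketch of a dilated causal convolution and a squeeze-and-excitation gate. This is an illustrative reconstruction of the generic operations, not the authors' DeepAKI implementation; all shapes and weights are hypothetical.

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation=1):
    """Causal 1-D convolution: the output at hour t only sees inputs at
    t, t - dilation, t - 2*dilation, ... (no leakage from the future).
    x: (T,) time series; w: (k,) kernel."""
    T, k = len(x), len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad with zeros
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(T)])

def squeeze_excitation(features, w1, w2):
    """SE recalibration: 'squeeze' each channel to its temporal mean, then
    'excite' with a two-layer sigmoid gate that rescales the channels.
    features: (C, T) channel-by-time map; w1: (r, C); w2: (C, r)."""
    z = features.mean(axis=1)                                   # squeeze -> (C,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))   # gate in (0, 1)
    return features * s[:, None]                                # channel-wise rescale
```

With the dilation doubling per layer (1, 2, 4, ...), a stack of such convolutions covers a long history of hourly predictors with few layers, which is presumably what makes the combination suited to continuous 24-hour risk prediction.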
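The integrated gradients method mentioned for explaining the model can likewise be sketched in a few lines. The toy sigmoid "risk model" and its weights below are hypothetical stand-ins for DeepAKI, used only to show the mechanics of path-integrated attribution.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=100):
    """Midpoint Riemann-sum approximation of integrated gradients:
    attr_i = (x_i - baseline_i) * mean over alpha of dF/dx_i, with the
    gradient evaluated along the straight path from baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * grads

# Toy differentiable risk model: sigmoid of a linear score (illustrative weights).
w = np.array([0.8, -0.5, 1.2])
f = lambda x: 1.0 / (1.0 + np.exp(-x @ w))    # predicted risk in (0, 1)
grad_f = lambda x: f(x) * (1.0 - f(x)) * w    # analytic gradient of f

x = np.array([1.0, 2.0, 0.5])                 # one patient-hour of inputs
baseline = np.zeros_like(x)                   # reference (all-zero) input
attr = integrated_gradients(grad_f, x, baseline)
```

By the completeness axiom, `attr.sum()` approximates `f(x) - f(baseline)`, so each `attr[i]` reads as that predictor's signed share of the risk change from the reference, which is how per-variable explanations along a patient's timeline can be produced.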
doi_str_mv 10.1016/j.artmed.2024.102785
format article
publisher Elsevier B.V. (Netherlands)
pmid 38462285
eissn 1873-2860
fulltext fulltext
identifier ISSN: 0933-3657
ispartof Artificial intelligence in medicine, 2024-03, Vol.149, p.102785, Article 102785
issn 0933-3657
1873-2860
1873-2860
language eng
recordid cdi_proquest_miscellaneous_2955265941
source Elsevier:Jisc Collections:Elsevier Read and Publish Agreement 2022-2024:Freedom Collection (Reading list)
subjects Acute kidney injury
Critical care
External validation
Model interpretation
Predictive model
title Development and validation of a deep interpretable network for continuous acute kidney injury prediction in critically ill patients