Should AI models be explainable to clinicians?

In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension and adherence to its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet, defining explainability and standardising assessments are ongoing challenges, and balancing performance and explainability can be needed, even if XAI is a growing field.
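
For illustration of what "explainability" can mean in practice, here is a minimal sketch of one common post-hoc XAI technique, permutation feature importance, applied to a hypothetical deterioration classifier trained on synthetic vital-sign data; the feature names, data, and model are invented for the example and are not drawn from the article.

```python
# Illustrative sketch only: post-hoc explainability via permutation
# feature importance on a hypothetical, synthetic ICU deterioration model.
# All feature names and data are invented for this example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic "vital sign" features: heart rate, lactate, mean arterial pressure
X = np.column_stack([
    rng.normal(90, 15, n),    # heart_rate
    rng.normal(2.0, 1.0, n),  # lactate
    rng.normal(75, 10, n),    # mean_arterial_pressure
])
# Synthetic outcome loosely driven by lactate and blood pressure
logits = 0.8 * (X[:, 1] - 2.0) - 0.05 * (X[:, 2] - 75)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade performance?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(["heart_rate", "lactate", "mean_arterial_pressure"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Global importances like these indicate which inputs the model relies on overall; they do not, by themselves, explain an individual prediction.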

Bibliographic Details
Published in: Critical care (London, England), 2024-09, Vol.28 (1), p.301, Article 301
Main Authors: Abgrall, Gwénolé; Holder, Andre L; Chelly Dagdia, Zaineb; Zeitouni, Karine; Monnet, Xavier
Format: Article
Language: English
Subjects: Artificial intelligence; Artificial Intelligence - standards; Artificial Intelligence - trends; Clinical Decision-Making - methods; Computer Science; Critical Care - methods; Critical Care - standards; Debate; Decision-making; Humans; Life Sciences; Physicians - standards; Statistics
Publisher: BioMed Central Ltd
ISSN: 1364-8535
EISSN: 1466-609X
DOI: 10.1186/s13054-024-05005-y
PMID: 39267172
Rights: The Author(s) 2024; distributed under a Creative Commons Attribution 4.0 International License
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391805/