
A review of Explainable Artificial Intelligence in healthcare


Bibliographic Details
Published in: Computers & Electrical Engineering, 2024-08, Vol. 118, p. 109370, Article 109370
Main Authors: Sadeghi, Zahra, Alizadehsani, Roohallah, CIFCI, Mehmet Akif, Kausar, Samina, Rehman, Rizwan, Mahanta, Priyakshi, Bora, Pranjal Kumar, Almasri, Ammar, Alkhawaldeh, Rami S., Hussain, Sadiq, Alatas, Bilal, Shoeibi, Afshin, Moosaei, Hossein, Hladík, Milan, Nahavandi, Saeid, Pardalos, Panos M.
Format: Article
Language: English
Description
Summary:
• Emphasizes the need for transparency to build healthcare professionals' trust in AI systems.
• Addresses the critical need for explainability, given the potential high-impact consequences of AI errors in healthcare.
• Categorizes XAI methods for healthcare research into six groups: feature-oriented, global, concept, surrogate, local pixel-based, and human-centric.
• Analyzes the significance of XAI in overcoming healthcare-specific challenges.
• Provides an exhaustive review of XAI applications and relevant experimental results in healthcare contexts.

Explainable Artificial Intelligence (XAI) encompasses the strategies and methodologies used to construct AI systems whose outputs and predictions end-users can comprehend and interpret. The increasing deployment of opaque AI applications in high-stakes fields, particularly healthcare, has amplified the need for clarity and explainability, since erroneous AI predictions in such critical sectors can have high-impact consequences. The effective integration of AI models in healthcare hinges on these models being both explainable and interpretable: gaining the trust of healthcare professionals requires AI applications to be transparent about their decision-making processes and underlying logic. Our paper conducts a systematic review of the various facets and challenges of XAI within the healthcare realm. It dissects a range of XAI methodologies and their applications in healthcare, categorizing them into six distinct groups: feature-oriented methods, global methods, concept models, surrogate models, local pixel-based methods, and human-centric approaches. Specifically, this study focuses on the significance of XAI in addressing healthcare-related challenges, underscoring its vital role in safety-critical scenarios.
Our objective is to provide an exhaustive exploration of XAI's applications in healthcare, alongside an analysis of relevant experimental outcomes, thereby fostering a holistic understanding of XAI's role and potential in this critical domain.
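Of the six method groups the review names, surrogate models are perhaps the easiest to illustrate concretely. Below is a minimal sketch, not drawn from the paper itself: the black-box "risk model" and the feature names (age, blood pressure) are hypothetical, and the code fits a local linear surrogate around one patient by regressing perturbed predictions, in the spirit of LIME-style local explanations.

```python
import numpy as np

# Hypothetical black-box clinical risk model (nonlinear, with an
# age/blood-pressure interaction); stands in for an opaque AI system.
def black_box(X):
    age, bp = X[:, 0], X[:, 1]
    logit = 0.04 * age + 0.02 * bp + 0.001 * age * bp - 13.5
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(0)
x0 = np.array([60.0, 140.0])  # the single patient we want explained

# Sample perturbations around x0 and record the black box's predictions.
Z = x0 + rng.normal(scale=[5.0, 10.0], size=(500, 2))
y = black_box(Z)

# Fit a local linear surrogate g(x) ≈ f(x) near x0 by least squares;
# the fitted slopes act as local feature attributions.
A = np.column_stack([Z - x0, np.ones(len(Z))])  # centered features + intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

attributions = dict(zip(["age", "blood_pressure"], coef[:2]))
print(attributions)  # each feature's approximate marginal effect near x0
```

The surrogate is only locally faithful: its slopes describe the black box's behavior in a neighborhood of `x0`, not globally, which is exactly the trade-off that distinguishes surrogate and local methods from the global methods in the taxonomy above.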
ISSN: 0045-7906
DOI: 10.1016/j.compeleceng.2024.109370