Current status and future directions of explainable artificial intelligence in medical imaging
Published in: European Journal of Radiology, 2025-02, Vol. 183, p. 111884, Article 111884
Format: Article
Language: English
Summary:
• Explainability in AI (XAI) is key for fostering trust, but current XAI methods remain complex for clinicians.
• AI explainability requires aligning with medical practitioners’ cognitive processes.
• Explores how fast (type 1) and slow (type 2) decision-making can be similar to algorithms applied in medical AI imaging.
• Commercialized AI systems offer explanations via heatmaps to indicate important regions.
• Future AI in healthcare must balance accuracy, explainability, and clinical outcomes.
The inherent “black box” nature of AI algorithms presents a substantial barrier to the widespread adoption of the technology in clinical settings, leading to a lack of trust among users. This review begins by examining the foundational stages involved in the interpretation of medical images by radiologists and clinicians, encompassing both type 1 (fast thinking: the brain’s ability to think and act intuitively) and type 2 (slow thinking: a deliberate, analytical, laborious approach to decision-making) processes. The discussion then delves into current Explainable AI (XAI) approaches, exploring both inherent and post-hoc explainability for medical imaging applications and highlighting the milestones achieved. XAI in medicine refers to AI systems designed to provide transparent, interpretable, and understandable reasoning behind AI predictions or decisions. Additionally, the paper showcases some commercial AI medical systems that offer explanations through features such as heatmaps. Opportunities, challenges, and potential avenues for advancing the field are also addressed. In conclusion, the review observes that state-of-the-art XAI methods are not yet mature enough for clinical implementation, as the explanations they provide remain challenging for medical experts to comprehend. A deeper understanding of the cognitive mechanisms used by medical professionals is important for developing more interpretable XAI methods.
ISSN: 0720-048X; 1872-7727
DOI: 10.1016/j.ejrad.2024.111884