Evaluating the necessity of the multiple metrics for assessing explainable AI: A critical examination
Published in: Neurocomputing (Amsterdam), 2024-10, Vol. 602, p. 128282, Article 128282
Format: Article
Language: English
Summary: This paper investigates the properties of Explainable Artificial Intelligence (xAI), particularly when implemented in AI/ML models in high-stakes sectors, in this case cybersecurity. The authors conduct a comprehensive systematic review of xAI properties, evaluation metrics, and existing frameworks to assess their utility and relevance. The experimental sections then evaluate selected xAI techniques against these metrics, delivering key insights into their practical utility and effectiveness. The findings highlight that the proliferation of metrics enhances the understanding of xAI systems but simultaneously exposes challenges such as metric duplication, inefficacy, and confusion. These issues underscore the pressing need for standardized evaluation frameworks to streamline their application and strengthen their effectiveness, thereby improving the overall utility of xAI in critical domains.
Highlights:
• Bridging xAI theory and practice.
• Systematic review of xAI metrics and frameworks.
• Experimental evaluation of various xAI explanations.
• The results show many metrics are ineffective.
• The abundance of metrics has pros and cons.
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2024.128282