Explaining the Explainers in Graph Neural Networks: a Comparative Study

Bibliographic Details
Published in: ACM Computing Surveys, 2024-09
Main Authors: Longa, Antonio; Azzolin, Steve; Santin, Gabriele; Cencetti, Giulia; Liò, Pietro; Lepri, Bruno; Passerini, Andrea
Format: Article
Language: English
Description
Summary: Following a fast initial breakthrough in graph-based learning, Graph Neural Networks (GNNs) have reached widespread application in many science and engineering fields, prompting the need for methods to understand their decision process. GNN explainers have started to emerge in recent years, with a multitude of methods either novel or adapted from other domains. To sort out this plethora of alternative approaches, several studies have benchmarked the performance of different explainers in terms of various explainability metrics. However, these earlier works make no attempt to provide insights into why different GNN architectures are more or less explainable, or into which explainer should be preferred in a given setting. In this survey we fill these gaps by devising a systematic experimental study, which tests twelve explainers on eight representative message-passing architectures trained on six carefully designed graph and node classification datasets. With our results we provide key insights into the choice and applicability of GNN explainers, isolate key components that make them usable and successful, and provide recommendations on how to avoid common interpretation pitfalls. We conclude by highlighting open questions and directions for possible future research.
ISSN: 0360-0300, 1557-7341
DOI: 10.1145/3696444