Toward Interpretable Graph Neural Networks via Concept Matching Model

Bibliographic Details
Main Authors: Bui, Tien-Cuong; Li, Wen-Syan
Format: Conference Proceeding
Language: English
Description
Summary: Graph Neural Networks have achieved notable success, yet explaining their rationales remains a challenging problem. Existing methods, including post-hoc and interpretable approaches, have numerous limitations. Post-hoc methods treat models as black boxes and can mislead users, while interpretable models often overlook user-centric explanations. Furthermore, most existing methods do not carefully consider the user's perception of explanations, potentially resulting in explanation-user mismatches. To address these problems, we propose a novel interpretable concept-matching model to enhance GNN interpretability and prediction accuracy. The proposed model extracts frequent concepts from input graphs using the graph information bottleneck theory and modified constraints. These concepts are managed in an in-memory concept corpus for efficient inference lookups and explanation generation. Various explanation construction features are implemented based on the concept corpus and the discovery module, aiming to fulfill diverse user preferences. Extensive experiments and a user study validate the performance of the proposed approach, showcasing its potential for improving model accuracy and interpretability.
ISSN: 2374-8486
DOI: 10.1109/ICDM58522.2023.00106
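
Note: The summary above states that frequent concepts are extracted using graph information bottleneck theory with modified constraints. This record does not reproduce the authors' modified constraints; for orientation only, the standard graph information bottleneck objective for subgraph (concept) recognition is commonly written as

\[
\min_{G_S \subseteq G} \; -I(Y; G_S) + \beta \, I(G; G_S)
\]

where \(G\) is the input graph, \(G_S\) the extracted concept subgraph, \(Y\) the prediction target, \(I(\cdot\,;\cdot)\) mutual information, and \(\beta > 0\) a coefficient trading off label relevance against compression of the input. The exact objective used in the paper may differ.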