The Road to Explainable Graph Neural Networks

Bibliographic Details
Published in: SIGMOD Record, 2024-11, Vol. 53 (3), pp. 32-34
Main Author: Ranu, Sayan
Format: Article
Language: English
Summary: GOAL: Graph Neural Networks (GNNs) are increasingly being adopted in real-world applications, including drug and material discovery [16, 17, 27], recommendation engines [29], congestion modeling on road networks [5], and weather forecasting [11]. However, like other deep-learning models, GNNs are considered black boxes due to their limited capacity to provide explanations for their predictions. This lack of interpretability poses a significant barrier to their adoption in critical domains such as healthcare, finance, and law enforcement, where transparency and trustworthiness are essential for decision-making. In these pivotal domains, understanding the rationale behind model predictions is crucial not only for compliance with interpretability requirements but also for identifying potential vulnerabilities and gaining insights to refine the model further. How do we make GNNs interpretable? This is a central motivating question driving my research pursuits.
ISSN: 0163-5808
DOI: 10.1145/3703922.3703930