Taxonomy of Benchmarks in Graph Representation Learning

Bibliographic Details
Published in: arXiv.org, 2022-11
Main Authors: Liu, Renming, Cantürk, Semih, Wenkel, Frederik, McGuire, Sarah, Wang, Xinyi, Little, Anna, O'Bray, Leslie, Perlmutter, Michael, Rieck, Bastian, Hirn, Matthew, Wolf, Guy, Rampášek, Ladislav
Format: Article
Language: English
Description
Summary: Graph Neural Networks (GNNs) extend the success of neural networks to graph-structured data by accounting for their intrinsic geometry. While extensive research has been done on developing GNN models with superior performance according to a collection of graph representation learning benchmarks, it is currently not well understood what aspects of a given model are probed by them. For example, to what extent do they test the ability of a model to leverage graph structure vs. node features? Here, we develop a principled approach to taxonomize benchmarking datasets according to a \(\textit{sensitivity profile}\) that is based on how much GNN performance changes due to a collection of graph perturbations. Our data-driven analysis provides a deeper understanding of which benchmarking data characteristics are leveraged by GNNs. Consequently, our taxonomy can aid in the selection and development of adequate graph benchmarks, and in better-informed evaluation of future GNN methods. Finally, our approach and its implementation in the \(\texttt{GTaxoGym}\) package are extendable to multiple graph prediction task types and future datasets.
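
For illustration only, the sketch below shows one way the sensitivity-profile idea could be computed: evaluate a model on a benchmark, re-evaluate it on perturbed copies of the benchmark, and record the performance differences as the dataset's profile. This is not the authors' \(\texttt{GTaxoGym}\) implementation; the specific perturbations (constant features, edge removal, random rewiring), the toy homophilous graph, and the one-step neighbour-aggregation classifier standing in for a GNN are all assumptions made for this sketch.

```python
# Minimal sketch of a sensitivity profile (illustrative; not GTaxoGym).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def evaluate(A, X, y):
    """Surrogate for 'train a GNN and report test accuracy': logistic
    regression on node features concatenated with mean-aggregated
    neighbour features (one message-passing step)."""
    deg = A.sum(1, keepdims=True).clip(min=1)
    Z = np.hstack([X, A @ X / deg])          # [self features | neighbour mean]
    Ztr, Zte, ytr, yte = train_test_split(Z, y, test_size=0.3, random_state=0)
    return LogisticRegression(max_iter=1000).fit(Ztr, ytr).score(Zte, yte)

# Example perturbations (assumed here for illustration).
def no_features(A, X, y):    # constant node features: structure only
    return A, np.ones_like(X), y

def no_edges(A, X, y):       # empty graph: node features only
    return np.zeros_like(A), X, y

def rewired(A, X, y):        # random graph with roughly the same edge density
    p = A.mean()
    R = (rng.random(A.shape) < p).astype(float)
    R = np.triu(R, 1)
    return R + R.T, X, y

def sensitivity_profile(A, X, y, perturbations):
    """Performance change of the model under each perturbation."""
    base = evaluate(A, X, y)
    return {name: evaluate(*fn(A, X, y)) - base for name, fn in perturbations.items()}

# Toy homophilous graph: labels drive both features and edges.
n = 300
y = rng.integers(0, 2, n)
X = y[:, None] + 0.8 * rng.standard_normal((n, 8))
same = (y[:, None] == y[None, :])
A = (rng.random((n, n)) < np.where(same, 0.08, 0.01)).astype(float)
A = np.triu(A, 1)
A = A + A.T

profile = sensitivity_profile(A, X, y, {
    "no_features": no_features, "no_edges": no_edges, "rewired": rewired})
print({k: round(v, 3) for k, v in profile.items()})
```

In this toy setting, datasets whose accuracy collapses under "no_edges" or "rewired" rely on graph structure, while those insensitive to "no_features" carry little signal in the node attributes; comparing such profile vectors across benchmarks is the kind of grouping the paper's taxonomy formalizes.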
ISSN:2331-8422