Nature vs. Nurture: Feature vs. Structure for Graph Neural Networks
Published in: Pattern Recognition Letters, 2022-07, Vol. 159, pp. 46-53
Main Authors: , , , , ,
Format: Article
Language: English
Summary:
- Node features and graph structure both affect the performance of GNN models.
- Graphs can be constructed by connecting node features via a latent function.
- GNNs can be used to reconstruct node features from graph structure.
- New graph structures with better predictability can be created from node features.
- The characteristics of a GNN model can be transferred to unseen graphs.
Graph neural networks take node features and graph structure as input to build representations for nodes and graphs. While much attention has been paid to GNN models themselves, the impact of node features and graph structure on GNN performance has received less study. In this paper, we propose an explanation for the connection between features and structure: graphs can be constructed by connecting node features according to a latent function. While this hypothesis seems trivial, it has several important implications. First, it allows us to define graph families, which we use to explain the transferability of GNN models. Second, it enables the application of GNNs to featureless graphs by reconstructing node features from graph structure. Third, it predicts the existence of a latent function that can create graphs which, when used with the original features in a GNN, outperform the original graphs on a specific task. We propose a graph generative model to learn such a function. Finally, our experiments confirm the hypothesis and these implications.
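To make the hypothesis concrete: one simple instance of a "latent function" connecting node features into a graph is a k-nearest-neighbor rule in feature space. The sketch below (a minimal illustration using NumPy, not the paper's learned generative model) builds an adjacency matrix by linking each node to its k closest neighbors by Euclidean distance.

```python
import numpy as np

def knn_graph(features, k=2):
    """Connect each node to its k nearest neighbors in feature space.

    This is one hand-picked 'latent function' mapping features to
    structure; the paper instead learns such a function with a graph
    generative model.
    """
    n = features.shape[0]
    # Pairwise squared Euclidean distances between node feature vectors.
    diff = features[:, None, :] - features[None, :, :]
    dist = (diff ** 2).sum(axis=-1)
    np.fill_diagonal(dist, np.inf)  # exclude self-loops
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        nearest = np.argsort(dist[i])[:k]
        adj[i, nearest] = 1
    # Symmetrize so the resulting graph is undirected.
    return np.maximum(adj, adj.T)

# Two well-separated clusters in feature space: the induced graph
# connects nodes within a cluster but not across clusters.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
A = knn_graph(X, k=2)
```

Because edges here are a deterministic function of the features, the same rule applied to new feature vectors yields graphs from the same "family", which is the intuition behind the transferability implication above.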
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2022.04.036