Learning by Transference: Training Graph Neural Networks on Growing Graphs

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, 2023-01, Vol. 71, p. 1-15
Main Authors: Cervino, Juan, Ruiz, Luana, Ribeiro, Alejandro
Format: Article
Language:English
Summary: Graph neural networks (GNNs) use graph convolutions to exploit network invariances and learn meaningful feature representations from network data. On large-scale graphs, however, convolutions incur a high computational cost, leading to scalability limitations. Leveraging the graphon, the limit object of a sequence of graphs, in this paper we consider the problem of learning a graphon neural network (WNN), the limit object of a GNN, by training GNNs on graphs sampled from the graphon. Under smoothness conditions, we show that: (i) the expected distance between the learning steps on the GNN and on the WNN decreases asymptotically with the size of the graph, and (ii) when training on a sequence of growing graphs, gradient descent follows the learning direction of the WNN. Inspired by these results, we propose a novel algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training. The algorithm is benchmarked on a recommendation system and a decentralized control problem, where it achieves performance comparable to that of its large-scale counterpart at a reduced computational cost.
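The growing-graph training loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the graphon choice, the single-layer graph filter, the numerical gradient, the learning rate, and the schedule of graph sizes are all illustrative assumptions. The key idea it demonstrates is that the filter taps are size-independent, so parameters learned on a small sampled graph transfer directly to larger graphs sampled from the same graphon.

```python
import numpy as np

# Illustrative graphon W(u, v) = exp(-2|u - v|); an assumption for this
# sketch, not the paper's experimental setup.
def graphon(u, v):
    return np.exp(-2.0 * np.abs(u[:, None] - v[None, :]))

def sample_graph(n, rng):
    """Sample an n-node undirected graph with edge probabilities W(u_i, u_j)."""
    u = rng.uniform(size=n)                       # latent node positions
    A = (rng.uniform(size=(n, n)) < graphon(u, u)).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                   # symmetric, no self-loops
    return A / n, u                               # 1/n scaling: graphon operator limit

def gnn_forward(A, x, h):
    """One-layer graph filter sum_k h_k A^k x followed by a ReLU."""
    z, Ak_x = np.zeros_like(x), x.copy()
    for hk in h:
        z += hk * Ak_x
        Ak_x = A @ Ak_x
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
h = 0.1 * rng.normal(size=3)                      # filter taps, shared across sizes
lr = 0.05
for n in [50, 100, 200]:                          # successively grow the graph
    A, u = sample_graph(n, rng)
    x = np.cos(np.pi * u)                         # graphon-induced input signal
    y = np.sin(np.pi * u)                         # illustrative regression target
    for _ in range(20):                           # a few gradient steps per size
        # finite-difference gradient of the MSE loss w.r.t. the taps
        base = np.mean((gnn_forward(A, x, h) - y) ** 2)
        grad = np.zeros_like(h)
        for k in range(len(h)):
            hp = h.copy()
            hp[k] += 1e-5
            grad[k] = (np.mean((gnn_forward(A, x, hp) - y) ** 2) - base) / 1e-5
        h -= lr * grad
```

Because the taps `h` parameterize the filter independently of the graph size, the same parameter vector is refined as `n` grows, which is what allows most of the training to happen on cheap small graphs.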
ISSN: 1053-587X, 1941-0476
DOI: 10.1109/TSP.2023.3242374