
Transferability Properties of Graph Neural Networks

Graph neural networks (GNNs) are composed of layers consisting of graph convolutions and pointwise nonlinearities. Due to their invariance and stability properties, GNNs are provably successful at learning representations from data supported on moderate-scale graphs. However, they are difficult to learn on large-scale graphs. In this paper, we study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs. We use graph limits called graphons to define limit objects for graph filters and GNNs — graphon filters and graphon neural networks (WNNs) — which we interpret as generative models for graph filters and GNNs. We then show that graphon filters and WNNs can be approximated by graph filters and GNNs sampled from them on weighted and stochastic graphs. Because the error of these approximations can be upper bounded, by a triangle inequality argument we can further bound the error of transferring a graph filter or a GNN across graphs. Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity. These findings are demonstrated empirically in a recommendation problem and in a decentralized control task.
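The abstract describes GNN layers built from graph convolutions and pointwise nonlinearities, applied to graphs sampled from a graphon, with a transference error that shrinks as the graph grows. The following is a minimal numerical sketch of that pipeline; the exponential graphon `W(x, y) = exp(-|x - y|)`, the filter taps `h`, and the cosine input signal are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative graphon (an assumption, not from the paper): W(x, y) = exp(-|x - y|).
def graphon(x, y):
    return np.exp(-np.abs(x - y))

def sample_weighted_graph(n):
    """n-node weighted graph sampled from the graphon at regular grid points,
    normalized by n so the adjacency approximates the graphon operator."""
    u = (np.arange(n) + 0.5) / n              # latent node positions in [0, 1]
    A = graphon(u[:, None], u[None, :]) / n   # A_ij = W(u_i, u_j) / n
    np.fill_diagonal(A, 0.0)                  # no self-loops
    return A, u

def graph_filter(A, x, h):
    """Polynomial graph filter (graph convolution): y = sum_k h_k A^k x."""
    y = np.zeros_like(x)
    Akx = x.copy()
    for hk in h:
        y += hk * Akx
        Akx = A @ Akx
    return y

def gnn_layer(A, x, h):
    """One GNN layer: graph convolution followed by a pointwise ReLU."""
    return np.maximum(graph_filter(A, x, h), 0.0)

def transfer_error(n, N, h):
    """Compare a GNN layer on an n-node graph against the same layer on a much
    larger N-node graph (a proxy for the graphon limit), matching nodes by
    nearest latent position."""
    A_n, u_n = sample_weighted_graph(n)
    A_N, u_N = sample_weighted_graph(N)
    y_n = gnn_layer(A_n, np.cos(2 * np.pi * u_n), h)
    y_N = gnn_layer(A_N, np.cos(2 * np.pi * u_N), h)
    idx = np.clip(np.round(u_n * N - 0.5).astype(int), 0, N - 1)
    return np.max(np.abs(y_n - y_N[idx]))

h = [0.5, 1.0, 0.5]  # illustrative filter taps
# Transference error against a large proxy graph, for increasing small-graph size:
errors = {n: transfer_error(n, 3200, h) for n in (50, 100, 400)}
```

Here the transference error is measured empirically against a large graph standing in for the graphon limit; the paper instead bounds this error analytically, via a triangle inequality through the graphon filter.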

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, 2023-01, Vol. 71, p. 1-16
Main Authors: Ruiz, Luana; Chamon, Luiz F. O.; Ribeiro, Alejandro
Format: Article
Language: English
Publisher: New York: IEEE
DOI: 10.1109/TSP.2023.3297848
ISSN: 1053-587X; EISSN: 1941-0476
Subjects: Approximation; Behavioral sciences; Control tasks; Convolution; Decentralized control; Errors; Filtering theory; Graph neural networks; graph signal processing; graphons; Graphs; Neural networks; Nonlinearity; Stochastic processes; Task analysis; Training; transferability
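The abstract states that graphon filters and WNNs can be approximated by filters and GNNs sampled from them on both weighted and stochastic graphs. The weighted case is sketched above the bibliographic details; the stochastic case draws each edge as a Bernoulli trial with probability given by the graphon. A minimal sketch, again assuming the illustrative exponential graphon rather than anything from the paper:

```python
import numpy as np

# Same illustrative graphon as an assumption (not from the paper).
def graphon(x, y):
    return np.exp(-np.abs(x - y))

def sample_stochastic_graph(n, rng):
    """Unweighted n-node graph sampled from the graphon: latent positions are
    drawn uniformly on [0, 1], and edge (i, j) is Bernoulli(W(u_i, u_j))."""
    u = rng.uniform(size=n)
    P = graphon(u[:, None], u[None, :])           # edge probabilities in (0, 1]
    U = rng.uniform(size=(n, n))
    A = (np.triu(U, 1) < np.triu(P, 1)).astype(float)
    return A + A.T                                 # symmetric, no self-loops

rng = np.random.default_rng(0)
A = sample_stochastic_graph(200, rng)
```

Sampling only the strict upper triangle and symmetrizing keeps the graph undirected with a zero diagonal, matching the simple-graph setting.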