
Ego-GNNs: Exploiting Ego Structures in Graph Neural Networks

Graph neural networks (GNNs) have achieved remarkable success as a framework for deep learning on graph-structured data. However, GNNs are fundamentally limited by their tree-structured inductive bias: the WL-subtree kernel formulation bounds the representational capacity of GNNs, and polynomial-time GNNs are provably incapable of recognizing triangles in a graph. In this work, we propose to augment the GNN message-passing operations with information defined on ego graphs (i.e., the induced subgraph surrounding each node). We term these approaches Ego-GNNs and show that Ego-GNNs are provably more powerful than standard message-passing GNNs. In particular, we show that Ego-GNNs are capable of recognizing closed triangles, which is essential given the prominence of transitivity in real-world graphs. We also motivate our approach from the perspective of graph signal processing as a form of multiplex graph convolution. Experimental results on node classification using synthetic and real data highlight the achievable performance gains using this approach.
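As an informal sketch of the ego-graph idea described in the abstract (not the authors' Ego-GNN implementation; the function name and toy graph below are made up for illustration), the following Python snippet uses networkx to build each node's 1-hop ego graph and read off a closed-triangle count, the kind of structural signal the paper argues standard message-passing GNNs cannot recover:

# Illustrative sketch only: derive an extra per-node feature from the ego graph
# (the induced subgraph over a node and its neighbours).
import networkx as nx

def ego_graph_triangle_features(G):
    """Return {node: number of closed triangles through that node},
    computed from each node's 1-hop ego graph."""
    feats = {}
    for v in G.nodes():
        ego = nx.ego_graph(G, v, radius=1)   # induced subgraph around v and its neighbours
        # Every edge between two neighbours of v closes a triangle through v.
        feats[v] = ego.number_of_edges() - ego.degree(v)
    return feats

# Toy example: nodes 0-1-2 form a triangle, node 3 hangs off node 2.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])
print(ego_graph_triangle_features(G))        # {0: 1, 1: 1, 2: 1, 3: 0}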


Bibliographic Details
Main Authors: Sandfelder, Dylan, Vijayan, Priyesh, Hamilton, William L.
Format: Conference Proceeding
Language: English
Subjects: Analytical models; Convolution; Deep learning; Graph neural networks; Image processing; Multiplexing; Performance gain
Online Access: Request full text
container_end_page 8527
container_start_page 8523
creator Sandfelder, Dylan
Vijayan, Priyesh
Hamilton, William L.
description Graph neural networks (GNNs) have achieved remarkable success as a framework for deep learning on graph-structured data. However, GNNs are fundamentally limited by their tree-structured inductive bias: the WL-subtree kernel formulation bounds the representational capacity of GNNs, and polynomial-time GNNs are provably incapable of recognizing triangles in a graph. In this work, we propose to augment the GNN message-passing operations with information defined on ego graphs (i.e., the induced subgraph surrounding each node). We term these approaches Ego-GNNs and show that Ego-GNNs are provably more powerful than standard message-passing GNNs. In particular, we show that Ego-GNNs are capable of recognizing closed triangles, which is essential given the prominence of transitivity in real-world graphs. We also motivate our approach from the perspective of graph signal processing as a form of multiplex graph convolution. Experimental results on node classification using synthetic and real data highlight the achievable performance gains using this approach.
doi_str_mv 10.1109/ICASSP39728.2021.9414015
format conference_proceeding
identifier EISSN: 2379-190X
ispartof ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021, p.8523-8527
issn 2379-190X
language eng
recordid cdi_ieee_primary_9414015
source IEEE Xplore All Conference Series
subjects Analytical models
Convolution
Deep learning
Graph neural networks
Image processing
Multiplexing
Performance gain
title Ego-GNNs: Exploiting Ego Structures in Graph Neural Networks