Deep Graph Neural Networks with Shallow Subgraph Samplers

While Graph Neural Networks (GNNs) are powerful models for learning representations on graphs, most state-of-the-art models gain little accuracy beyond two to three layers. Deep GNNs fundamentally need to address: (1) the expressivity challenge due to oversmoothing, and (2) the computation challenge due to neighborhood explosion. We propose a simple "deep GNN, shallow sampler" design principle to improve both GNN accuracy and efficiency: to generate the representation of a target node, we use a deep GNN to pass messages only within a shallow, localized subgraph. A properly sampled subgraph may exclude irrelevant or even noisy nodes while still preserving the critical neighbor features and graph structures. The deep GNN then smooths the informative local signals to enhance feature learning, rather than oversmoothing the global graph signals into mere "white noise". We theoretically justify why the combination of deep GNNs with shallow samplers yields the best learning performance. We then propose various sampling algorithms and neural-architecture extensions to achieve good empirical results. On the largest public graph dataset, ogbn-papers100M, we achieve state-of-the-art accuracy with an order-of-magnitude reduction in hardware cost.
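The design principle is concrete enough to sketch. Below is a minimal NumPy illustration written from the abstract alone: to embed a target node, a shallow k-hop sampler first restricts the graph, and a GNN with many more layers than k then passes messages only inside that subgraph. The function names (`khop_subgraph`, `deep_gnn_on_subgraph`) are hypothetical, and the untrained mean-aggregation layers stand in for whatever architecture and sampling algorithms the paper actually proposes.

```python
# Hedged sketch of the "deep GNN, shallow sampler" principle: the sampler
# depth k (here 2) is decoupled from the GNN depth (here 8 layers), so deep
# message passing never touches nodes outside the shallow local subgraph.
import numpy as np

def khop_subgraph(adj, target, k):
    """Shallow sampler: nodes reachable from `target` within k hops."""
    frontier, nodes = {target}, {target}
    for _ in range(k):
        nxt = {int(v) for u in frontier for v in np.nonzero(adj[u])[0]}
        frontier = nxt - nodes
        nodes |= nxt
    return sorted(nodes)

def deep_gnn_on_subgraph(adj, feats, target, k=2, num_layers=8, seed=0):
    """Run `num_layers` rounds of mean-aggregation message passing (with
    random, untrained weights for demonstration only) restricted to the
    k-hop subgraph of `target`; num_layers >> k is the whole point."""
    rng = np.random.default_rng(seed)
    nodes = khop_subgraph(adj, target, k)
    sub = adj[np.ix_(nodes, nodes)] + np.eye(len(nodes))  # add self-loops
    deg = sub.sum(axis=1, keepdims=True)                  # row degrees
    h = feats[nodes].astype(float)
    d = h.shape[1]
    for _ in range(num_layers):
        w = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))
        h = np.maximum((sub / deg) @ h @ w, 0.0)          # mean-aggregate + ReLU
    return h[nodes.index(target)]                         # target's representation

# Toy usage: a 6-node path graph with 4-dimensional random features.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
X = np.random.default_rng(1).normal(size=(6, 4))
print(deep_gnn_on_subgraph(A, X, target=2).shape)  # (4,)
```

Note the cost contrast this decoupling buys: with average degree d, an 8-layer GNN on the full graph touches on the order of d^8 nodes per target, while here the 8 layers only ever see the 2-hop neighborhood of roughly d^2 nodes.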

Bibliographic Details
Published in: arXiv.org, 2022-03
Main Authors: Zeng, Hanqing; Zhang, Muhan; Xia, Yinglong; Srivastava, Ajitesh; Malevich, Andrey; Kannan, Rajgopal; Prasanna, Viktor; Long, Jin; Chen, Ren
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Subjects: Accuracy; Algorithms; Graph neural networks; Graph theory; Graphical representations; Graphs; Machine learning; Model accuracy; Neural networks; Samplers; White noise
Source: Publicly Available Content Database