Explain Graph Neural Networks to Understand Weighted Graph Features in Node Classification

Real data collected from different applications, which carry additional topological structure and connection information, are naturally represented as weighted graphs. For the node labeling problem, Graph Neural Networks (GNNs) are a powerful tool that can mimic experts' decisions on node labeling. GNNs combine node features, connection patterns, and graph structure by using a neural network to embed node information and pass it along the edges of the graph. We want to identify the patterns in the input data that the GNN model uses to make decisions, and to examine whether the model works as intended. However, due to the complex data representation and the non-linear transformations involved, explaining decisions made by GNNs is challenging. In this work, we propose new graph-feature explanation methods to identify the informative components and important node features. In addition, we propose a pipeline to identify the key factors used for node classification. We use four datasets (two synthetic and two real) to validate our methods. Our results demonstrate that our explanation approach can mimic the data patterns that human interpretation would use for node classification and can disentangle different features in the graphs. Furthermore, our explanation methods can be used for understanding data, debugging GNN models, and examining model decisions.
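Editor's note: the abstract's description of GNNs — embedding node features and passing them along weighted edges — corresponds to a message-passing layer. Below is a minimal illustrative sketch in NumPy, not the authors' implementation; the function name `gcn_layer`, the symmetric normalization choice, and the toy data are all assumptions made for illustration.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One message-passing step on a weighted graph (GCN-style sketch).

    A: (n, n) symmetric weighted adjacency matrix
    H: (n, d_in) node feature matrix
    W: (d_in, d_out) learnable weight matrix
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)                       # weighted node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalization
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ H @ W, 0.0)      # aggregate, transform, ReLU

rng = np.random.default_rng(0)
# Toy weighted graph: 4 nodes; edge weights encode connection strength.
A = np.array([[0.0, 2.0, 0.0, 0.0],
              [2.0, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
H = rng.normal(size=(4, 3))                     # 3 input features per node
W = rng.normal(size=(3, 2))                     # 2 output classes
logits = gcn_layer(A, H, W)                     # (4, 2) per-node class scores
print(logits.argmax(axis=1))                    # predicted label per node
```

Stacking such layers and training W against labeled nodes yields a node classifier; the edge weights in A directly shape how much each neighbor contributes to a node's prediction.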

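The abstract's explanation methods identify the informative graph components behind a prediction. The paper's actual methods are not reproduced here; as a hedged illustration of the general idea only, the perturbation-style scoring below drops one edge at a time and measures how much a target node's output moves. The helper `edge_importance` and its scoring rule are assumptions, and the snippet reuses `gcn_layer`, `A`, `H`, and `W` from the sketch above.

```python
import numpy as np  # reuses gcn_layer, A, H, W from the previous sketch

def edge_importance(A, H, W, node):
    """Perturbation-style edge saliency for one node's prediction.

    Drop each edge in turn and measure the change in the node's logits;
    a larger change suggests a more informative edge. Illustrative only,
    not the authors' proposed method.
    """
    base = gcn_layer(A, H, W)[node]
    scores = {}
    rows, cols = np.nonzero(np.triu(A))         # each undirected edge once
    for i, j in zip(rows, cols):
        A_drop = A.copy()
        A_drop[i, j] = A_drop[j, i] = 0.0       # remove edge (i, j)
        delta = gcn_layer(A_drop, H, W)[node] - base
        scores[(int(i), int(j))] = float(np.linalg.norm(delta))
    return scores

print(edge_importance(A, H, W, node=1))         # larger score = more informative edge
```

An analogous mask over the columns of H would surface important node features; ranking these scores gives a human-readable account of what the classifier relied on.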
Bibliographic Details
Published in: arXiv.org, 2020-02
Main Authors: Li, Xiaoxiao; Saude, Joao
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Classification; Debugging; Decisions; Graph neural networks; Graph theory; Graphical representations; Identification methods; Labeling; Linear transformations; Neural networks; Nodes
Online Access: Get full text