
Locality preserving dense graph convolutional networks with graph context-aware node representations

Bibliographic Details
Published in: Neural Networks, 2021-11, Vol. 143, pp. 108–120
Main Authors: Liu, Wenfeng; Gong, Maoguo; Tang, Zedong; Qin, A.K.; Sheng, Kai; Xu, Mingliang
Format: Article
Language: English
Description
Summary: Graph convolutional networks (GCNs) have been widely used for representation learning on graph data; they capture structural patterns on a graph via specifically designed convolution and readout operations. In many graph classification applications, GCN-based approaches have outperformed traditional methods. However, most existing GCNs are inefficient at preserving the local information of graphs, a limitation that is especially problematic for graph classification. In this work, we propose a locality-preserving dense GCN with graph context-aware node representations. Specifically, the proposed model incorporates a local node feature reconstruction module that preserves the initial node features in the learned node representations, realized via a simple but effective encoder-decoder mechanism. To capture local structural patterns in neighborhoods covering different ranges of locality, dense connectivity is introduced to connect each convolutional layer and its corresponding readout with all previous convolutional layers. To enhance node representativeness, the output of each convolutional layer is concatenated with the output of the previous layer's readout, forming a global context-aware node representation. In addition, a self-attention module aggregates the layer-wise representations into the final graph-level representation. Experiments on benchmark datasets demonstrate the superiority of the proposed model over state-of-the-art methods in terms of classification accuracy.
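
Since the abstract walks through four architectural components (feature reconstruction, dense connectivity, readout concatenation, and layer-wise self-attention), a minimal sketch may help make them concrete. The PyTorch code below is not the authors' implementation: it assumes a single graph given as a dense, symmetrically normalized adjacency matrix, uses mean pooling as the readout, and every name (GCNLayer, LPDGCN, hid_dim, and so on) is a hypothetical choice of ours; the paper's actual convolution, readout, and attention definitions may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    # One graph convolution on a dense normalized adjacency: H' = ReLU(A_hat H W).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return F.relu(a_hat @ self.lin(h))

class LPDGCN(nn.Module):
    # Hypothetical sketch of the described architecture (all names are ours):
    #  - an encoder-decoder pair reconstructs the initial node features, so
    #    local feature information is preserved in the representations;
    #  - dense connectivity feeds every previous layer's output into each layer;
    #  - the previous layer's readout (a graph-context vector) is concatenated
    #    to the node features before each convolution;
    #  - self-attention over the per-layer readouts yields the graph embedding.
    def __init__(self, in_dim, hid_dim, n_layers, n_classes):
        super().__init__()
        self.encoder = GCNLayer(in_dim, hid_dim)
        self.decoder = nn.Linear(hid_dim, in_dim)   # reconstructs X from H
        self.convs = nn.ModuleList(
            GCNLayer(hid_dim * (l + 1) + hid_dim, hid_dim)  # dense inputs + context
            for l in range(n_layers))
        self.attn = nn.Linear(hid_dim, 1)           # scores each layer's readout
        self.cls = nn.Linear(hid_dim, n_classes)

    def forward(self, a_hat, x):
        h = self.encoder(a_hat, x)
        x_rec = self.decoder(h)                     # for the reconstruction loss
        outs, readouts = [h], [h.mean(dim=0)]       # mean-pool readout per layer
        for conv in self.convs:
            ctx = readouts[-1].expand(h.size(0), -1)          # broadcast context
            h = conv(a_hat, torch.cat(outs + [ctx], dim=-1))  # dense connections
            outs.append(h)
            readouts.append(h.mean(dim=0))
        r = torch.stack(readouts)                   # (n_layers + 1, hid_dim)
        w = torch.softmax(self.attn(r), dim=0)      # attention over layers
        g = (w * r).sum(dim=0)                      # graph-level representation
        return self.cls(g), F.mse_loss(x_rec, x)

# Toy usage: one random graph with 6 nodes and 8-dim features (made-up sizes).
N, D = 6, 8
adj = ((torch.rand(N, N) < 0.3).float() + torch.eye(N) > 0).float()
adj = ((adj + adj.t()) > 0).float()                 # symmetrize, keep self-loops
deg = adj.sum(dim=1)
a_hat = adj / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)  # D^-1/2 A D^-1/2
x = torch.randn(N, D)
logits, recon_loss = LPDGCN(D, 16, n_layers=3, n_classes=3)(a_hat, x)

In training, the reconstruction loss would presumably enter the objective as a locality-preserving regularizer alongside the classification loss; the abstract does not state the weighting, so that detail is left open here.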
ISSN: 0893-6080
eISSN: 1879-2782
DOI: 10.1016/j.neunet.2021.05.031