MobileGCN applied to low-dimensional node feature learning
Published in: Pattern Recognition, 2021-04, Vol. 112, p. 107788, Article 107788
Main Authors: , , , , , ,
Format: Article
Language: English
Summary:
• This paper proposes a novel Graph Convolutional Network (GCN), namely MobileGCN, for semi-supervised learning on graphs represented in a low-dimensional node feature space.
• MobileGCN is an extension of MobileNet and MobileNet_v2 to learning on graph data; all of them are based on Depth-wise Separable Convolution (DSC).
• Compared with other GCN models, our model, applied to graphs with low-dimensional node features, has four advantages: modularization, expansibility, reciprocity, and robustness.
• Evaluated on three metrics (Accuracy, Macro-f1, and Matthews correlation coefficient (MCC)), our experiments demonstrate that MobileGCN can provide state-of-the-art results in both low- and high-dimensional node feature spaces.
The paper centers on the iterative learning process in Graph Convolutional Networks (GCNs), which involves two vital steps: a message propagation (message passing) step that aggregates neighboring node features via aggregators, and an encoding output step that encodes node feature representations using updaters. In our model, we propose a novel affinity-aware encoding as the updater in GCNs, which aggregates the neighboring nodes of a node while updating that node's features. By utilizing the affinity values of our encoding, we order the neighboring nodes to determine the correspondence between encoding functions and neighboring nodes. Furthermore, to explicitly reduce the model size, we propose a lightweight variant of our updater that integrates Depth-wise Separable Convolution (DSC), namely Depth-wise Separable Graph Convolution (DSGC). Comprehensive experiments on graph data demonstrate that our models' accuracy improves significantly on graphs with low-dimensional node features. Moreover, in the low-dimensional node feature space we provide state-of-the-art results on two metrics (Macro-f1 and Matthews correlation coefficient (MCC)). Our models are also robust under different low-dimensional feature selection strategies.
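To make the idea of a depth-wise separable graph convolution concrete, the sketch below shows one minimal way to factor an updater into a per-channel (depth-wise) step applied after neighbor aggregation, followed by a point-wise channel-mixing step. This is only an illustrative assumption based on the abstract: the class name, tensor shapes, aggregation rule, and the form of the depth-wise step are not taken from the authors' implementation, and the affinity-aware ordering of neighbors is not modeled here.

```python
# Minimal PyTorch sketch of a depth-wise separable graph convolution layer.
# Hypothetical names and shapes; not the paper's DSGC implementation.
import torch
import torch.nn as nn


class DepthwiseSeparableGraphConv(nn.Module):
    """Aggregate neighbor features, then apply a depth-wise (per-channel)
    transform followed by a point-wise (channel-mixing) transform."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Depth-wise step: one scalar weight per input channel (assumed form).
        self.depthwise = nn.Parameter(torch.ones(in_channels))
        # Point-wise step: mixes channels, analogous to a 1x1 convolution.
        self.pointwise = nn.Linear(in_channels, out_channels)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_channels) node feature matrix
        # adj: (num_nodes, num_nodes) normalized adjacency (message passing)
        h = adj @ x              # aggregate neighboring node features
        h = h * self.depthwise   # depth-wise, per-channel scaling
        return self.pointwise(h) # point-wise channel mixing


if __name__ == "__main__":
    # Tiny usage example on a random 4-node toy graph.
    x = torch.randn(4, 8)
    adj = torch.eye(4) + torch.rand(4, 4)     # unnormalized toy adjacency
    adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize
    layer = DepthwiseSeparableGraphConv(8, 16)
    print(layer(x, adj).shape)                # torch.Size([4, 16])
```

The point of the factorization, as in MobileNet-style DSC, is that the depth-wise step touches each channel independently and the point-wise step handles all cross-channel mixing, which reduces the parameter count relative to a single dense graph-convolution weight matrix.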
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2020.107788