
MGPOOL: multi-granular graph pooling convolutional networks representation learning

Bibliographic Details
Published in:International journal of machine learning and cybernetics 2022-03, Vol.13 (3), p.783-796
Main Authors: Xin, Zhenghua, Chen, Guolong, Chen, Jie, Zhao, Shu, Wang, Zongchao, Fang, Aidong, Pan, Zhenggao, Cui, Lin
Format: Article
Language:English
Description
Summary:Graph convolutional networks (GCNs) have become the new state of the art for network representation learning. Most existing methods are single-granular: they fail to analyze the graph from multi-granular views and thus lose abundant information. Advanced graph pooling techniques can substantially benefit semi-supervised network representation learning, but capturing multi-granular information through graph pooling on graphs without additional input features remains a great challenge. We propose a graph node embedding framework, MGPOOL. First, inspired by triadic influence learning, we use a 3-clique algorithm to coarsen the graph repeatedly: the three nodes of a triangle form a supernode, and we treat supernodes as the key nodes for our graph pooling operations, which preserves local relationships. The sequence of coarsened graphs captures consecutive 3-cliques from the finest to the coarsest graph, preserving global structural relationships. Second, we run an unsupervised single-granular algorithm on the coarsest graph to obtain its node embeddings. Based on these embeddings, our graph pooling operations generate another graph of the same size as the coarsest graph; this compensates for the uniqueness of a single coarsening result and expands the receptive field of each node to avoid losing high-proximity information. Third, we take the embeddings, the coarsest graph, and the new coarsest graph as the unified input of MGPOOL and restore the coarsest graph to the original graph to obtain node embeddings for the original graph. Experimental results on four public datasets (Wiki, Cora, CiteSeer, and DBLP) demonstrate that our method achieves better Macro-F1 for node classification, and better AUC and AP for link prediction, than the baseline methods.
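The 3-clique coarsening step described above can be illustrated with a minimal sketch. This is not the authors' exact algorithm (the paper's implementation details, such as tie-breaking and repetition, are not given in the abstract); it is a hedged, self-contained illustration of the core idea: greedily find disjoint triangles and merge each triangle's three nodes into one supernode, rewiring edges accordingly.

```python
from itertools import combinations

def coarsen_by_triangles(edges):
    """One round of 3-clique (triangle) coarsening, as a sketch:
    greedily merge each disjoint triangle's three nodes into a supernode.
    Nodes not covered by any chosen triangle map to themselves."""
    # Build an undirected adjacency structure.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    merged = {}   # original node -> supernode id
    used = set()  # nodes already absorbed into a supernode
    for u in adj:
        if u in used:
            continue
        for v, w in combinations(adj[u], 2):
            if v in used or w in used:
                continue
            if w in adj[v]:  # u, v, w form a triangle (3-clique)
                super_id = ("s",) + tuple(sorted((u, v, w), key=str))
                for x in (u, v, w):
                    merged[x] = super_id
                used.update({u, v, w})
                break

    mapping = {n: merged.get(n, n) for n in adj}
    # Rewire edges to the coarsened node set, dropping internal edges.
    coarse_edges = {(mapping[u], mapping[v]) for u, v in edges
                    if mapping[u] != mapping[v]}
    return mapping, coarse_edges

# Example: triangle {1, 2, 3} with a pendant node 4 attached to node 3.
mapping, coarse_edges = coarsen_by_triangles([(1, 2), (2, 3), (1, 3), (3, 4)])
# Nodes 1, 2, 3 collapse into one supernode; node 4 survives,
# connected to that supernode by a single coarse edge.
```

Applying this routine repeatedly yields the chain of graphs "from the finest to the coarsest" that the abstract refers to; the `mapping` returned at each level is what a pooling/unpooling scheme would use to push embeddings between granularities.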
ISSN:1868-8071
1868-808X
DOI:10.1007/s13042-021-01328-2