Dynamic Representation Learning via Recurrent Graph Neural Networks

Bibliographic Details
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2023-02, Vol. 53 (2), p. 1284-1297
Main Authors: Zhang, Chun-Yang, Yao, Zhi-Liang, Yao, Hong-Yu, Huang, Feng, Chen, C. L. Philip
Format: Article
Language:English
Summary: A large number of real-world systems generate graphs, structured data composed of nodes and edges. In many scenarios these graphs are dynamic, with nodes or edges evolving over time. Recently, graph representation learning (GRL) has achieved great success in network analysis; it aims to produce informative and representative features or low-dimensional embeddings by exploiting node attributes and network topology. Most state-of-the-art models for dynamic GRL combine a static representation learning model with a recurrent neural network (RNN): the former generates the representations of a graph or its nodes from one static snapshot at a discrete time step, while the latter captures the temporal correlation between adjacent graphs. However, this two-stage design ignores the temporal dynamics between contiguous graphs during the learning of graph representations. To alleviate this problem, this article proposes a representation learning model for dynamic graphs, called DynGNN. In contrast to two-stage approaches, it is a single-stage model that embeds an RNN into a graph neural network to produce better representations in a compact form. It accounts for the fusion of temporal and topological correlations from low-level to high-level feature learning, enabling the model to capture more fine-grained evolving patterns. Experimental results on both synthetic and real-world networks show that the proposed DynGNN yields significant improvements in multiple tasks compared with state-of-the-art counterparts.
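The single-stage idea described in the summary, a recurrent update embedded inside the graph convolution rather than an RNN stacked on top of a static encoder, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the GRU cell, the layer sizes, and the dense row-normalized adjacency are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the authors' code) of a recurrent update
# embedded inside a graph convolution, so topology and temporal state are
# fused at every layer instead of in a separate second stage.
import torch
import torch.nn as nn


class RecurrentGraphLayer(nn.Module):
    """One layer mixing neighborhood aggregation with a GRU state update."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, hid_dim)  # transform aggregated features
        self.gru = nn.GRUCell(hid_dim, hid_dim)   # carry the temporal state

    def forward(self, x: torch.Tensor, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features of the current snapshot
        # adj: (N, N) normalized adjacency of the current snapshot
        # h: (N, hid_dim) hidden state carried over from the previous snapshot
        agg = adj @ x                      # aggregate neighbor features (topology)
        msg = torch.relu(self.linear(agg))
        return self.gru(msg, h)            # recurrent fusion with the previous state


if __name__ == "__main__":
    # Toy usage over a sequence of random graph snapshots.
    N, F, H, T = 5, 8, 16, 3
    layer = RecurrentGraphLayer(F, H)
    h = torch.zeros(N, H)                              # initial hidden state per node
    for _ in range(T):
        x = torch.randn(N, F)                          # snapshot node features
        adj = torch.softmax(torch.randn(N, N), dim=1)  # stand-in row-normalized adjacency
        h = layer(x, adj, h)
    print(h.shape)  # torch.Size([5, 16]): final node embeddings
```

At each snapshot the hidden state already mixes topology (the adjacency aggregation) with history (the GRU state), which is the low-level fusion the summary contrasts with two-stage designs.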
ISSN: 2168-2216
EISSN: 2168-2232
DOI: 10.1109/TSMC.2022.3196506