Continual learning with high-order experience replay for dynamic network embedding
Published in: Pattern Recognition, 2025-03, Vol. 159, Article 111093
Main Authors:
Format: Article
Language: English
Summary: Dynamic network embedding (DNE) poses a tough challenge in graph representation learning, especially when confronted with the frequent updates of streaming data. Conventional DNEs primarily resort to parameter updating but perform inadequately on historical networks, resulting in the problem of catastrophic forgetting. To tackle such issues, recent advancements in graph neural networks (GNNs) have explored matrix factorization techniques. However, these approaches encounter difficulties in preserving the global patterns of incremental data. In this paper, we propose CLDNE, a Continual Learning framework specifically designed for Dynamic Network Embedding. At the core of CLDNE lies a streaming graph auto-encoder that effectively captures both global and local patterns of the input graph. To further overcome catastrophic forgetting, CLDNE is equipped with an experience replay buffer and a knowledge distillation module, which preserve high-order historical topology and static historical patterns. We conduct experiments on four dynamic networks using link prediction and node classification tasks to evaluate the effectiveness of CLDNE. The outcomes demonstrate that CLDNE successfully mitigates the catastrophic forgetting problem and reduces training time by 80% without a significant loss in learning new patterns.
Highlights:
• We develop CLDNE to derive generalized representations for dynamic networks.
• We propose a streaming graph auto-encoder in CLDNE to learn global patterns.
• We design an experience replay buffer to replay high-order historical topology.
• We highlight static historical features using a knowledge distillation module.
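The summary and highlights above describe three interacting pieces: a streaming graph auto-encoder, an experience replay buffer that stores high-order historical topology, and a knowledge distillation module that anchors new embeddings to previously learned ones. The sketch below is a minimal, self-contained illustration (in PyTorch) of how one continual-learning stage over a new graph snapshot could combine these three loss terms. The toy `GraphAutoEncoder`, the `train_snapshot` routine, the uniform replay sampling, and the weights `lam_replay`/`lam_distill` are illustrative assumptions only; they are not the authors' CLDNE implementation or hyperparameters.

```python
# Minimal sketch of continual learning on a dynamic network:
# reconstruction on the new snapshot + experience replay of stored
# historical neighborhoods + distillation against a frozen old model.
# Assumptions: a toy MLP auto-encoder over 0/1 adjacency rows; fixed
# node count; uniform replay sampling. Not the paper's implementation.
import copy
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAutoEncoder(nn.Module):
    """Toy auto-encoder: encode an adjacency row, reconstruct it."""

    def __init__(self, num_nodes: int, dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(num_nodes, 128), nn.ReLU(),
                                     nn.Linear(128, dim))
        self.decoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                     nn.Linear(128, num_nodes))

    def forward(self, adj_rows: torch.Tensor):
        z = self.encoder(adj_rows)      # node embeddings
        recon = self.decoder(z)         # reconstructed neighborhood logits
        return z, recon


def train_snapshot(model, old_model, adj_new, replay_buffer, optimizer,
                   lam_replay=0.5, lam_distill=0.5, epochs=10, replay_size=64):
    """One continual-learning stage on a new snapshot (adj_new: (N, N) floats)."""
    for _ in range(epochs):
        optimizer.zero_grad()

        # (1) learn the new snapshot: reconstruct current neighborhoods
        _, recon = model(adj_new)
        loss = F.binary_cross_entropy_with_logits(recon, adj_new)

        if replay_buffer:
            batch = torch.stack(random.sample(
                replay_buffer, min(replay_size, len(replay_buffer))))
            z_student, recon_old = model(batch)
            # (2) experience replay: keep reconstructing historical topology
            loss = loss + lam_replay * F.binary_cross_entropy_with_logits(
                recon_old, batch)
            # (3) distillation: keep embeddings close to the frozen old model
            with torch.no_grad():
                z_teacher, _ = old_model(batch)
            loss = loss + lam_distill * F.mse_loss(z_student, z_teacher)

        loss.backward()
        optimizer.step()

    # store a few rows of the new snapshot for future replay
    idx = torch.randperm(adj_new.size(0))[:replay_size]
    replay_buffer.extend(list(adj_new[idx]))
    return copy.deepcopy(model).eval()   # becomes the "old model" next stage


# Toy usage on a random 0/1 adjacency "snapshot".
N = 200
model = GraphAutoEncoder(N)
old = copy.deepcopy(model).eval()
buffer, opt = [], torch.optim.Adam(model.parameters(), lr=1e-3)
snapshot = (torch.rand(N, N) < 0.05).float()
old = train_snapshot(model, old, snapshot, buffer, opt)
```

The point the sketch illustrates is the division of labor suggested by the abstract: the replay term re-optimizes reconstruction of stored historical neighborhoods (topology), while the distillation term anchors new embeddings to a frozen copy of the previous model (static historical patterns), and together they counteract catastrophic forgetting while the main loss fits the incoming snapshot.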
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2024.111093