Hierarchical Graph Neural Network Based on Semi-Implicit Variational Inference
Published in: IEEE Transactions on Cognitive and Developmental Systems, 2023-06, Vol. 15 (2), pp. 887-895
Main Authors: , , , , , , ,
Format: Article
Language: English
Summary: Graph neural networks (GNNs) have achieved outstanding results on relational data. However, such data have uncertain properties; for example, spurious edges may be included. The variational graph autoencoder (VGAE) was recently proposed to address this problem, but the distributional assumptions of its variational family restrict the flexibility of variational inference (VI), and its mean-field variational family cannot capture complex posterior distributions. To address these issues, this article proposes a novel GNN model based on semi-implicit variational inference (SIVI), which embeds nodes into a latent space to improve VI flexibility and enhances VI expressiveness with a mixing distribution. Specifically, to approximate the true posterior, the variational posterior is defined within a semi-implicit hierarchical variational framework, which can model complex posteriors. Moreover, an iterative decoder is used to better capture graph properties. In addition, the hierarchical structure of the model allows it to incorporate neighbor information between nodes. In experiments on multiple data sets, the method achieves state-of-the-art results compared with similar methods. In particular, on the Citeseer citation data set without node features, it outperforms VGAE by 9%.
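As a rough illustration of the semi-implicit idea mentioned in the summary (not the authors' actual model): in SIVI the parameters of an explicit conditional distribution are themselves drawn from an implicit mixing distribution, so the marginal posterior can be far more flexible than a single Gaussian. The sketch below uses only toy choices (a tanh transform as the "implicit" layer, scalar latents); all function names and constants are hypothetical.

```python
import math
import random

random.seed(0)

def sample_semi_implicit(n):
    """Draw n samples from a toy semi-implicit distribution.

    The Gaussian mean mu is a nonlinear transform of mixing noise
    (the implicit layer), so the marginal over z is bimodal, i.e.
    non-Gaussian, even though z | mu is an ordinary Gaussian.
    """
    samples = []
    for _ in range(n):
        eps = random.gauss(0.0, 1.0)       # mixing noise (implicit layer)
        mu = 2.0 * math.tanh(3.0 * eps)    # toy nonlinear transform -> means near +/-2
        z = random.gauss(mu, 0.3)          # explicit conditional q(z | mu)
        samples.append(z)
    return samples

zs = sample_semi_implicit(10_000)
mean = sum(zs) / len(zs)
var = sum((z - mean) ** 2 for z in zs) / len(zs)
```

A mean-field Gaussian with the conditional's variance (0.09) could never match this marginal: the mixing step inflates the overall variance and splits the mass into two modes, which is the extra expressiveness the summary attributes to the semi-implicit construction.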
ISSN: 2379-8920, 2379-8939
DOI: 10.1109/TCDS.2022.3193398