Privacy-Enhanced Graph Neural Network for Decentralized Local Graphs

Bibliographic Details
Published in: IEEE Transactions on Information Forensics and Security, 2024, Vol. 19, pp. 1614-1629
Main Authors: Pei, Xinjun; Deng, Xiaoheng; Tian, Shengwei; Liu, Jianqing; Xue, Kaiping
Format: Article
Language: English
Description
Summary: With the ever-growing interest in modeling complex graph structures, graph neural networks (GNNs) provide a generalized framework for exploiting non-Euclidean data. However, the global graph may be distributed across multiple data centers, which makes conventional graph-based models incapable of modeling the complete graph structure. This also poses an unprecedented challenge to user privacy protection in distributed graph learning: because of the privacy requirements imposed by legal policies, existing graph-based solutions are difficult to deploy in practice. In this paper, we propose a privacy-preserving graph neural network based on local graph augmentation, named LGA-PGNN, which preserves user privacy by injecting local differential privacy (LDP) noise into the decentralized local graphs held by different data holders. Moreover, we perform local neighborhood augmentation on low-degree vertices to enhance the expressiveness of the learned model. To evaluate privacy risks, we formulate two graph privacy attacks, namely an attribute inference attack and a link stealing attack, both of which aim to compromise user privacy. The experimental results demonstrate that LGA-PGNN effectively mitigates these two attacks and provably avoids potential privacy leakage while preserving the utility of the learned model.
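The abstract describes two mechanisms: LDP noise injected into each data holder's local graph, and neighborhood augmentation for low-degree vertices. The sketch below illustrates one plausible instantiation of those two steps, using the Laplace mechanism on node features and randomized response on adjacency bits. All function names, the min_degree threshold, and the jitter scale are illustrative assumptions for exposition; they are not the paper's actual LGA-PGNN design.

    import numpy as np

    def ldp_perturb_features(X, epsilon, sensitivity=1.0, rng=None):
        # Laplace mechanism on node features: noise scale = sensitivity / epsilon.
        rng = rng if rng is not None else np.random.default_rng()
        return X + rng.laplace(0.0, sensitivity / epsilon, size=X.shape)

    def ldp_perturb_edges(A, epsilon, rng=None):
        # Randomized response on adjacency bits: keep each bit with
        # probability e^eps / (e^eps + 1), flip it otherwise (epsilon-LDP per bit).
        rng = rng if rng is not None else np.random.default_rng()
        p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
        flips = rng.random(A.shape) >= p_keep
        noisy = np.where(flips, 1 - A, A)
        upper = np.triu(noisy, 1)  # re-symmetrize, drop self-loops
        return upper + upper.T

    def augment_low_degree(A, X, min_degree=2, jitter=0.01, rng=None):
        # Attach one synthetic neighbor (a jittered copy of the vertex's
        # features) to every vertex whose degree is below min_degree.
        rng = rng if rng is not None else np.random.default_rng()
        for v in np.where(A.sum(axis=1) < min_degree)[0]:
            X = np.vstack([X, X[v] + rng.normal(0.0, jitter, size=X.shape[1])])
            A = np.pad(A, ((0, 1), (0, 1)))  # add a row/column for the new vertex
            A[v, -1] = A[-1, v] = 1
        return A, X

    # Toy local graph held by one data holder.
    rng = np.random.default_rng(0)
    A = (rng.random((6, 6)) < 0.3).astype(int)
    A = np.triu(A, 1); A = A + A.T
    X = rng.normal(size=(6, 4))

    # Note: perturbing features and edges separately consumes two
    # separate epsilon budgets under sequential composition.
    A_priv = ldp_perturb_edges(A, epsilon=2.0, rng=rng)
    X_priv = ldp_perturb_features(X, epsilon=2.0, rng=rng)
    A_aug, X_aug = augment_low_degree(A_priv, X_priv, rng=rng)

Augmenting after perturbation, as done here, keeps the synthetic neighbors outside the privacy accounting since they are derived from already-noised data; whether the paper orders the steps this way is an assumption.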
ISSN: 1556-6013, 1556-6021
DOI: 10.1109/TIFS.2023.3329971