Equipping Federated Graph Neural Networks with Structure-aware Group Fairness

Bibliographic Details
Main Authors: Cui, Nan, Wang, Xiuling, Wang, Wendy Hui, Chen, Violet, Ning, Yue
Format: Conference Proceeding
Language: English
Description
Summary: Graph Neural Networks (GNNs) are used for graph data processing across various domains. Centralized training of GNNs often faces challenges due to privacy and regulatory issues, making federated learning (FL) a preferred solution in distributed settings. However, GNNs may inherit biases from training data, and these biases can propagate to the global model in federated scenarios. To address this issue, we introduce F²GNN, a Fair Federated Graph Neural Network, to enhance group fairness. Recognizing that bias originates from both data and algorithms, F²GNN aims to mitigate both types of bias under federated settings. We offer theoretical insights into the relationship between data bias and statistical fairness metrics in GNNs. Building on this analysis, F²GNN features a fairness-aware local model update scheme and a fairness-weighted global model update scheme that considers both data bias and local model fairness during aggregation. Empirical evaluations show that F²GNN outperforms state-of-the-art baselines in both fairness and accuracy.
ISSN: 2374-8486
DOI: 10.1109/ICDM58522.2023.00111
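
The abstract describes a fairness-weighted global model update, where client contributions to the aggregated model depend on data bias and local model fairness. The sketch below illustrates one plausible reading of that idea: clients with a larger statistical-parity gap on their local predictions receive smaller aggregation weights. The function names (statistical_parity_gap, fairness_weighted_aggregate), the softmax-style weighting, and the temperature parameter are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def statistical_parity_gap(preds, groups):
    """Absolute difference in positive-prediction rates between two sensitive
    groups (one common statistical fairness metric; the paper may use others)."""
    rate_a = preds[groups == 0].mean()
    rate_b = preds[groups == 1].mean()
    return abs(rate_a - rate_b)

def fairness_weighted_aggregate(client_params, fairness_gaps, temperature=1.0):
    """Aggregate client parameters, down-weighting clients whose local
    predictions show a larger group-fairness gap. This is only a sketch of a
    'fairness-weighted global model update', not the paper's update rule."""
    gaps = np.asarray(fairness_gaps, dtype=float)
    # Smaller gap -> larger weight (softmax over negative, temperature-scaled gaps).
    logits = -gaps / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    aggregated = {}
    for key in client_params[0]:
        aggregated[key] = sum(w * p[key] for w, p in zip(weights, client_params))
    return aggregated

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy clients, each holding a single weight matrix.
    clients = [{"W": rng.normal(size=(4, 2))} for _ in range(2)]
    # Toy local predictions and sensitive-group labels used to score each client.
    gaps = []
    for _ in clients:
        preds = rng.integers(0, 2, size=100)
        groups = rng.integers(0, 2, size=100)
        gaps.append(statistical_parity_gap(preds, groups))
    global_params = fairness_weighted_aggregate(clients, gaps)
    print("fairness gaps:", [round(g, 3) for g in gaps])
    print("aggregated W shape:", global_params["W"].shape)
```

In this sketch the weighting replaces the usual sample-count weights of plain federated averaging; in practice one would likely combine fairness scores with data-size or bias terms, as the abstract suggests both are considered during aggregation.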