On the local convergence of GANs with differential privacy: Gradient clipping and noise perturbation
Published in: Expert Systems with Applications, 2023-08, Vol. 224, Article 120006
Main Authors:
Format: Article
Language: English
Summary: Generative Adversarial Networks (GANs) are known to implicitly memorize details of the sensitive data used to train them. Many approaches have been proposed to prevent this privacy leakage. One of the most popular is Differentially Private Gradient Descent GANs (DPGD GANs), in which the discriminator's gradients are clipped and appropriate random noise is added to the clipped gradients. This article presents a theoretical analysis of the convergence behavior of DPGD GANs and examines the effect of the clipping and noise-perturbation operators on convergence properties. It is proved that a clipping bound that is too small leads to instability in the training procedure. Then, assuming that the simultaneous/alternating gradient descent method is locally convergent to a fixed point and its operator is L-Lipschitz with L …
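The clipping-and-noise step described in the summary follows the familiar DP-SGD pattern: clip each per-sample discriminator gradient to a fixed L2 bound, average, and perturb with random noise. Below is a minimal Python sketch of that mechanism, assuming per-sample gradients and Gaussian noise; the function name and parameters (dp_perturb_gradients, clip_bound, noise_multiplier) are illustrative and not taken from the paper.

```python
import numpy as np

def dp_perturb_gradients(per_sample_grads, clip_bound, noise_multiplier, rng=None):
    """Clip each per-sample gradient to L2 norm `clip_bound`, average, and add Gaussian noise.

    A generic DP-SGD style sketch of the clipping + noise-perturbation step
    described above; the names and the Gaussian noise model are illustrative
    assumptions, not the paper's exact algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Rescale so the per-sample gradient's L2 norm is at most clip_bound.
        clipped.append(g * min(1.0, clip_bound / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise scale proportional to the clipping bound, divided by the batch size
    # because the noise is added to the averaged (not summed) gradient.
    sigma = noise_multiplier * clip_bound / len(per_sample_grads)
    return mean_grad + rng.normal(0.0, sigma, size=mean_grad.shape)


# Toy usage: four per-sample gradients of a three-parameter discriminator.
grads = [np.array([0.5, -2.0, 1.0]), np.array([3.0, 0.1, -0.4]),
         np.array([-1.2, 0.7, 0.3]), np.array([0.0, 0.9, -2.5])]
noisy_update = dp_perturb_gradients(grads, clip_bound=1.0, noise_multiplier=1.1)
print(noisy_update)
```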
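For background on the local-convergence assumption quoted in the summary, the standard contraction argument can be stated as follows; this is textbook material, not a reproduction of the paper's proof.

```latex
% If the update operator $T$ of simultaneous/alternating gradient descent is
% $L$-Lipschitz near a fixed point $x^{\ast}$ with $L < 1$, then
\[
  \|T(x) - x^{\ast}\| = \|T(x) - T(x^{\ast})\| \le L\,\|x - x^{\ast}\|,
\]
% so the iterates $x_{k+1} = T(x_k)$ satisfy
\[
  \|x_k - x^{\ast}\| \le L^{k}\,\|x_0 - x^{\ast}\|,
\]
% i.e. they converge locally to $x^{\ast}$ at a linear (geometric) rate.
```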
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2023.120006