Privacy-Preserving Remote Sensing Image Generation and Classification with Differentially Private GANs

Bibliographic Details
Published in: IEEE Sensors Journal, 2023-09, Vol. 23 (18), p. 1-1
Main Authors: Huang, Yujian; Cao, Lei
Format: Article
Language: English
Description
Summary: Generative Adversarial Networks (GANs) have demonstrated a remarkable capacity to learn the training data distribution and produce high-quality synthetic images, and have been widely adopted for image recognition tasks in the remote sensing research community. However, previous work has shown that using GANs does not preserve privacy: they are susceptible to membership inference attacks, leaving sensitive information vulnerable to nefarious activities. This drawback is considered severe in the remote sensing community, where critical research places a high value on the security and privacy of image content. Thus, to publicly share sensitive data in support of critical research while also guaranteeing the accuracy of models trained on privacy-preserving data, this work develops GANs within the Differential Privacy (DP) framework and proposes Remote Sensing Differentially Private Generative Adversarial Networks (RS-DPGANs) for both privacy-preserving synthetic image generation and classification. RS-DPGANs can release a safe version of synthetic data while obtaining favorable classification results, giving rigorous guarantees for the privacy of sensitive data and a balance between model accuracy and the degree of privacy preservation. Extensive empirical and statistical results confirm the effectiveness of the framework.
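Training a GAN under differential privacy, as the abstract describes, typically means sanitizing the discriminator's gradients with the DP-SGD recipe: clip each per-example gradient to a fixed norm, average, then add calibrated Gaussian noise. The sketch below illustrates that core step in NumPy; the function name and parameters are illustrative assumptions, not taken from the paper, and a real implementation would apply this inside the training loop with a privacy accountant tracking the cumulative (epsilon, delta) budget.

```python
import numpy as np

def dp_sanitize_gradients(per_example_grads, clip_norm, noise_multiplier, rng):
    """DP-SGD gradient sanitization (illustrative sketch).

    Clips each per-example gradient to L2 norm `clip_norm`, averages the
    clipped gradients, and adds Gaussian noise with standard deviation
    noise_multiplier * clip_norm / batch_size, so each example's influence
    on the update is bounded.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down only when the gradient exceeds the clipping norm.
        factor = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * factor)
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(
        0.0, noise_multiplier * clip_norm / len(clipped), size=mean_grad.shape
    )
    return mean_grad + noise

# Example: one gradient above the clip norm, one below.
rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
update = dp_sanitize_gradients(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

With `noise_multiplier=0` the function reduces to plain clipped averaging, which is a convenient way to unit-test the clipping logic before enabling noise.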
ISSN: 1530-437X, 1558-1748
DOI: 10.1109/JSEN.2023.3267001