
Differential privacy in deep learning: A literature survey

Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2024-07, Vol. 589, Article 127663
Main Authors: Pan, Ke; Ong, Yew-Soon; Gong, Maoguo; Li, Hui; Qin, A.K.; Gao, Yuan
Format: Article
Language: English
Description
Summary: The widespread adoption of deep learning is facilitated in part by the availability of large-scale data for training effective models. However, these data may contain sensitive personal information, which raises privacy concerns for data providers. Differential privacy is regarded as a key technique in the field of privacy preservation, and it has drawn much attention owing to its ability to provide rigorous, provable privacy guarantees for training data. Training deep learning models in a differentially private manner is a topic gaining traction, as it effectively mitigates the reconstruction and inference of sensitive information. Taking this cue, this paper presents a comprehensive and systematic study of differentially private deep learning from the facets of privacy attack and privacy preservation. We propose a new taxonomy for analyzing the privacy attacks faced in deep learning, then survey the differential-privacy-based preservation techniques that counter these attacks. Finally, we offer a first probe into the real-world application of differentially private deep learning and conclude with several potential future research avenues. This survey provides promising directions for protecting sensitive information in training data via differential privacy during deep learning model training.
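
For context, the "rigorous and provable privacy guarantees" invoked in the summary usually refer to the standard (ε, δ)-differential privacy definition; the formulation below is the textbook statement, not quoted from the article itself. A randomized mechanism \mathcal{M} satisfies (ε, δ)-differential privacy if, for every pair of adjacent datasets D and D' (differing in a single record) and every measurable output set S,

\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[\mathcal{M}(D') \in S] + \delta.

Smaller ε and δ give stronger guarantees; in deep learning this is commonly realized by per-example gradient clipping plus calibrated Gaussian noise during training, as in DP-SGD.
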
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2024.127663