Differential privacy in deep learning: A literature survey
The widespread adoption of deep learning is facilitated in part by the availability of large-scale data for training desirable models. However, these data may involve sensitive personal information, which raises privacy concerns for data providers. Differential privacy has come to be regarded as a key technique in the privacy preservation field and has drawn much attention owing to its capability of providing rigorous and provable privacy guarantees for training data. Training deep learning models in a differentially private manner is a topic that is gaining traction, as it effectively mitigates the reconstruction and inference of sensitive information. Taking this cue, in this paper we present a comprehensive and systematic study of differentially private deep learning from the facets of privacy attack and privacy preservation. We explore a new taxonomy to analyze the privacy attacks faced in deep learning and then survey the types of privacy preservation based on differential privacy that tackle such attacks. Finally, we offer a first probe into the real-world application of differentially private deep learning and conclude with several potential future research avenues. This survey provides promising directions for protecting sensitive information in training data via differential privacy during deep learning model training.
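The privacy guarantee referred to in the abstract is the standard (ε, δ)-differential privacy condition: a randomized training mechanism \(\mathcal{M}\) satisfies (ε, δ)-differential privacy if, for any two training sets \(D\) and \(D'\) differing in a single record and any set \(S\) of possible outputs,

\[
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta,
\]

so that including or excluding any one individual's data changes the distribution over trained models by at most a factor of \(e^{\varepsilon}\), up to a small slack \(\delta\).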
Published in: | Neurocomputing (Amsterdam) 2024-07, Vol.589, p.127663, Article 127663 |
---|---|
Main Authors: | Pan, Ke; Ong, Yew-Soon; Gong, Maoguo; Li, Hui; Qin, A.K.; Gao, Yuan |
Format: | Article |
Language: | English |
Subjects: | Deep learning; Differential privacy; Privacy attack; Privacy preservation |
DOI: | 10.1016/j.neucom.2024.127663 |
ISSN: | 0925-2312 |
EISSN: | 1872-8286 |
Publisher: | Elsevier B.V. |