
Deep Learning with Gaussian Differential Privacy

Bibliographic Details
Published in: Harvard Data Science Review, 2020-01, Vol. 2020 (23)
Main Authors: Bu, Zhiqi; Dong, Jinshuo; Long, Qi; Su, Weijie J.
Format: Article
Language: English
Publisher: The MIT Press

Description:
Deep learning models are often trained on datasets that contain sensitive information such as individuals’ shopping transactions, personal contacts, and medical records. An increasingly important line of work has therefore sought to train neural networks subject to privacy constraints that are specified by differential privacy or its divergence-based relaxations. These privacy definitions, however, have weaknesses in handling certain important primitives (composition and subsampling), thereby giving loose or complicated privacy analyses of training neural networks. In this paper, we consider a recently proposed privacy definition termed f-differential privacy [18] for a refined privacy analysis of training neural networks. Leveraging the appealing properties of f-differential privacy in handling composition and subsampling, this paper derives analytically tractable expressions for the privacy guarantees of both stochastic gradient descent and Adam used in training deep neural networks, without the need to develop sophisticated techniques as [3] did. Our results demonstrate that the f-differential privacy framework allows for a new privacy analysis that improves on the prior analysis [3], which in turn suggests tuning certain parameters of neural networks for better prediction accuracy without violating the privacy budget. These theoretically derived improvements are confirmed by our experiments in a range of tasks in image classification, text classification, and recommender systems. Python code to calculate the privacy cost for these experiments is publicly available in the TensorFlow Privacy library.
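
The abstract notes that Python code for computing the privacy cost of these experiments is available in the TensorFlow Privacy library. As a rough illustration of what that accounting involves, the sketch below computes a Gaussian-DP parameter mu for noisy SGD under Poisson subsampling and converts it to an (epsilon, delta) guarantee. It is a minimal sketch, not the authors' released code: the function names, the example hyperparameters, and the exact closed-form expression for mu are assumptions made for illustration; consult the paper and the TensorFlow Privacy library for the authoritative versions.

    # Minimal sketch of Gaussian differential privacy (GDP) accounting for
    # noisy SGD, in the spirit of the analysis summarized above. Not the
    # authors' released code; names and values are illustrative assumptions.
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm


    def compute_mu(epochs, noise_multiplier, n, batch_size):
        """Approximate GDP parameter mu for noisy SGD with Poisson subsampling.

        Uses a central-limit-theorem-style approximation of the form
            mu = (batch_size / n) * sqrt(T * (exp(1 / sigma^2) - 1)),
        where T is the total number of noisy gradient steps and sigma is the
        noise multiplier.
        """
        steps = epochs * n / batch_size     # total number of gradient steps T
        sampling_rate = batch_size / n      # per-step inclusion probability p
        return sampling_rate * np.sqrt(steps * (np.exp(noise_multiplier ** -2) - 1))


    def delta_from_eps(mu, eps):
        """delta such that a mu-GDP mechanism is (eps, delta)-differentially private."""
        return norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)


    def eps_from_delta(mu, delta):
        """Numerically invert delta_from_eps to report epsilon at a target delta."""
        return brentq(lambda eps: delta_from_eps(mu, eps) - delta, 0.0, 50.0)


    if __name__ == "__main__":
        # Illustrative values only: an MNIST-sized dataset (n = 60,000),
        # batch size 256, noise multiplier 1.1, trained for 15 epochs.
        mu = compute_mu(epochs=15, noise_multiplier=1.1, n=60_000, batch_size=256)
        print(f"GDP parameter mu ~ {mu:.3f}")
        print(f"epsilon at delta = 1e-5: {eps_from_delta(mu, 1e-5):.2f}")

The reported epsilon is obtained by numerically inverting the mu-to-(epsilon, delta) conversion at a target delta, which is how such privacy budgets are typically stated.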

DOI: 10.1162/99608f92.cfc5dd25
ISSN: 2688-8513
EISSN: 2644-2353
PMID: 33251529
Source: Directory of Open Access Journals