Shredder: Learning Noise Distributions to Protect Inference Privacy
Published in: arXiv.org, 2020-10
Main Authors: Mireshghallah, Fatemehsadat; Taram, Mohammadkazem; Ramrakhyani, Prakash; Tullsen, Dean; Esmaeilzadeh, Hadi
Format: Article
Language: English
Subjects: Accuracy; Cloud computing; Communication channels; Experimentation; Inference; Noise; Noise reduction; Privacy; Servers; Tensors; Topology
Online Access: Get full text
Field | Value |
---|---|
container_title | arXiv.org |
creator | Mireshghallah, Fatemehsadat; Taram, Mohammadkazem; Ramrakhyani, Prakash; Tullsen, Dean; Esmaeilzadeh, Hadi |
description | A wide variety of deep neural applications increasingly rely on the cloud to perform their compute-heavy inference. This common practice requires sending private and privileged data over the network to remote servers, exposing it to the service provider and potentially compromising its privacy. Even if the provider is trusted, the data can still be vulnerable over communication channels or via side-channel attacks in the cloud. To that end, this paper aims to reduce the information content of the communicated data, with as little compromise to inference accuracy as possible, by making the sent data noisy. An undisciplined addition of noise can significantly reduce the accuracy of inference, rendering the service unusable. To address this challenge, this paper devises Shredder, an end-to-end framework that, without altering the topology or the weights of a pre-trained network, learns additive noise distributions that significantly reduce the information content of communicated data while maintaining the inference accuracy. The key idea is to find the additive noise distributions by casting the problem as a disjoint offline learning process with a loss function that strikes a balance between accuracy and information degradation. The loss function also exposes a knob for a disciplined and controlled asymmetric trade-off between privacy and accuracy. Experimentation with six real-world DNNs from text processing and image classification shows that Shredder reduces the mutual information between the input and the data communicated to the cloud by 74.70% compared to the original execution, while sacrificing only 1.58% accuracy. On average, Shredder also offers a speedup of 1.79x over Wi-Fi and 2.17x over LTE compared to cloud-only execution when using an off-the-shelf mobile GPU (Tegra X2) on the edge. (A hedged code sketch of the noise-learning loop described here follows the record fields below.) |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2020-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2231642883 |
source | Publicly Available Content Database |
subjects | Accuracy; Cloud computing; Communication channels; Experimentation; Inference; Noise; Noise reduction; Privacy; Servers; Tensors; Topology |
title | Shredder: Learning Noise Distributions to Protect Inference Privacy |
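The abstract above outlines Shredder's mechanism: a pre-trained network is split at an intermediate layer, additive noise distributions are learned offline against a loss that balances accuracy and information degradation, and a coefficient acts as the privacy/accuracy knob. The following is a minimal sketch of such a noise-learning loop under stated assumptions, not the authors' released implementation: the ResNet-18 backbone, the split point after `layer2`, the Gaussian parameterization of the noise, and the use of the learned noise scale as a stand-in for the paper's information-degradation term are all illustrative choices (PyTorch and torchvision >= 0.13 assumed).

```python
# Hedged sketch: learn an additive noise distribution for a frozen, pre-trained
# classifier split at an intermediate layer. Only the noise parameters are trained.
import torch
import torch.nn as nn
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen pre-trained network, split into an on-device "edge" part and a "cloud" part.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).to(device).eval()
for p in backbone.parameters():
    p.requires_grad_(False)

edge = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu,
                     backbone.maxpool, backbone.layer1, backbone.layer2)
cloud = nn.Sequential(backbone.layer3, backbone.layer4, backbone.avgpool,
                      nn.Flatten(), backbone.fc)

# Learnable parameters of an additive Gaussian noise distribution over the
# intermediate activation (shape assumed for 224x224 ResNet-18 inputs).
act_shape = (128, 28, 28)
mu = torch.zeros(act_shape, device=device, requires_grad=True)
log_sigma = torch.zeros(act_shape, device=device, requires_grad=True)

opt = torch.optim.Adam([mu, log_sigma], lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = 0.1  # trade-off knob: larger -> more noise (more privacy), potentially lower accuracy

def training_step(x, y):
    """One disjoint offline step: only the noise parameters are updated."""
    opt.zero_grad()
    act = edge(x)                                    # activation that would be sent
    noise = mu + log_sigma.exp() * torch.randn_like(act)
    logits = cloud(act + noise)                      # cloud side sees only the noisy activation
    # Encourage correct predictions while rewarding a large noise scale
    # (a simple proxy for the information-degradation term in the abstract).
    loss = ce(logits, y) - lam * log_sigma.exp().mean()
    loss.backward()
    opt.step()
    return loss.item()

# Example offline iteration with a dummy batch (ImageNet-sized inputs assumed):
x = torch.randn(8, 3, 224, 224, device=device)
y = torch.randint(0, 1000, (8,), device=device)
print(training_step(x, y))
```

At inference time, under this sketch, the edge device would compute the activation locally, add noise sampled from the learned distribution, and transmit only the noisy activation to the cloud half of the network, which matches the abstract's claim that neither the topology nor the weights of the pre-trained network are altered.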