Privacy Against a Hypothesis Testing Adversary
Privacy against an adversary (AD) that tries to detect the underlying privacy-sensitive data distribution is studied. The original data sequence is assumed to come from one of two known distributions, and the privacy leakage is measured by the probability of error of the binary hypothesis test carried out by the AD. A management unit (MU) is allowed to manipulate the original data sequence in an online fashion while satisfying an average distortion constraint. The goal of the MU is to maximize the minimal type II probability of error subject to a constraint on the type I probability of error assuming an adversarial Neyman-Pearson test, or to maximize the minimal error probability assuming an adversarial Bayesian test. The asymptotic exponents of the maximum minimal type II probability of error and the maximum minimal error probability are shown to be characterized by a Kullback-Leibler divergence rate and a Chernoff information rate, respectively. The privacy performances of two particular management policies, the memoryless hypothesis-aware policy and the hypothesis-unaware policy with memory, are compared. The proposed formulation can also model adversarial example generation with minimal data manipulation to fool classifiers. Finally, the results are applied to a smart meter privacy problem, where the user's energy consumption is manipulated by adaptively using a renewable energy source in order to hide the user's activity from the energy provider.
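The two error exponents named in the abstract can be illustrated numerically. The following standalone Python sketch is not code from the paper; the distributions `p0` and `p1` are made-up example values. It computes the Kullback-Leibler divergence, which by Stein's lemma gives the best asymptotic type II error exponent under a Neyman-Pearson test, and the Chernoff information, which gives the asymptotic error exponent of the optimal Bayesian test.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in nats, for finite distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def chernoff_information(p, q, grid=1000):
    """Chernoff information: max over s in [0, 1] of -log(sum_x p(x)^s * q(x)^(1-s))."""
    best = 0.0
    for i in range(grid + 1):
        s = i / grid
        best = max(best, -math.log(sum(pi ** s * qi ** (1 - s) for pi, qi in zip(p, q))))
    return best

# Hypothetical binary data distributions under the two hypotheses
p0 = [0.9, 0.1]  # H0, e.g. "appliance off"
p1 = [0.4, 0.6]  # H1, e.g. "appliance on"

# Neyman-Pearson setting: type II error exponent (Stein's lemma)
print(f"KL exponent: {kl(p0, p1):.4f}")
# Bayesian setting: error exponent of the minimal error probability
print(f"Chernoff exponent: {chernoff_information(p0, p1):.4f}")
```

The Chernoff information never exceeds either of the two KL divergences, matching the intuition that the Bayesian exponent, which must control both error types at once, is the weaker of the two.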
Published in: | IEEE Transactions on Information Forensics and Security, 2019-06, Vol. 14 (6), pp. 1567-1581 |
---|---|
Main Authors: | Zuxing Li; Tobias J. Oechtering; Deniz Gunduz |
Format: | Article |
Language: | English |
Subjects: | Data privacy; Hypothesis testing; Neyman-Pearson test; Bayesian test; Information theory; Large deviations; Privacy-enhancing technology; Smart meters |
creator | Zuxing Li; Oechtering, Tobias J.; Gunduz, Deniz |
description | Privacy against an adversary (AD) that tries to detect the underlying privacy-sensitive data distribution is studied. The original data sequence is assumed to come from one of the two known distributions, and the privacy leakage is measured by the probability of error of the binary hypothesis test carried out by the AD. A management unit (MU) is allowed to manipulate the original data sequence in an online fashion while satisfying an average distortion constraint. The goal of the MU is to maximize the minimal type II probability of error subject to a constraint on the type I probability of error assuming an adversarial Neyman-Pearson test, or to maximize the minimal error probability assuming an adversarial Bayesian test. The asymptotic exponents of the maximum minimal type II probability of error and the maximum minimal error probability are shown to be characterized by a Kullback-Leibler divergence rate and a Chernoff information rate, respectively. Privacy performances of particular management policies, the memoryless hypothesis-aware policy and the hypothesis-unaware policy with memory, are compared. The proposed formulation can also model adversarial example generation with minimal data manipulation to fool classifiers. At last, the results are applied to a smart meter privacy problem, where the user's energy consumption is manipulated by adaptively using a renewable energy source in order to hide user's activity from the energy provider. |
doi_str_mv | 10.1109/TIFS.2018.2882343 |
format | article |
publisher | New York: IEEE |
eissn | 1556-6021 |
coden | ITIFA6 |
identifier | ISSN: 1556-6013 |
ispartof | IEEE transactions on information forensics and security, 2019-06, Vol.14 (6), p.1567-1581 |
issn | 1556-6013; 1556-6021 |
language | eng |
source | IEEE Electronic Library (IEL) Journals |
subjects | Bayes methods; Bayesian analysis; Bayesian test; Codes; Data privacy; Distortion; Divergence; Energy consumption; Error analysis; Hypotheses; Hypothesis testing; Information theory; Large deviations; Measurement uncertainty; Neyman-Pearson test; Privacy; Privacy-enhancing technology; Smart meters; Testing |
title | Privacy Against a Hypothesis Testing Adversary |