Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text
Guaranteeing a certain level of user privacy in an arbitrary piece of text is a challenging issue. However, with this challenge comes the potential of unlocking access to vast data stores for training machine learning models and supporting data driven decisions. We address this problem through the lens of dx-privacy, a generalization of Differential Privacy to non Hamming distance metrics. In this work, we explore word representations in Hyperbolic space as a means of preserving privacy in text. We provide a proof satisfying dx-privacy, then we define a probability distribution in Hyperbolic space and describe a way to sample from it in high dimensions. Privacy is provided by perturbing vector representations of words in high dimensional Hyperbolic space to obtain a semantic generalization. We conduct a series of experiments to demonstrate the tradeoff between privacy and utility. Our privacy experiments illustrate protections against an authorship attribution algorithm while our utility experiments highlight the minimal impact of our perturbations on several downstream machine learning models. Compared to the Euclidean baseline, we observe > 20x greater guarantees on expected privacy against comparable worst case statistics.
Main Authors: | Feyisetan, Oluwaseyi; Diethe, Tom; Drake, Thomas |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | data sanitization; document redaction; privacy; privacy analysis |
container_end_page | 219 |
container_start_page | 210 |
creator | Feyisetan, Oluwaseyi; Diethe, Tom; Drake, Thomas |
description | Guaranteeing a certain level of user privacy in an arbitrary piece of text is a challenging issue. However, with this challenge comes the potential of unlocking access to vast data stores for training machine learning models and supporting data driven decisions. We address this problem through the lens of dx-privacy, a generalization of Differential Privacy to non Hamming distance metrics. In this work, we explore word representations in Hyperbolic space as a means of preserving privacy in text. We provide a proof satisfying dx-privacy, then we define a probability distribution in Hyperbolic space and describe a way to sample from it in high dimensions. Privacy is provided by perturbing vector representations of words in high dimensional Hyperbolic space to obtain a semantic generalization. We conduct a series of experiments to demonstrate the tradeoff between privacy and utility. Our privacy experiments illustrate protections against an authorship attribution algorithm while our utility experiments highlight the minimal impact of our perturbations on several downstream machine learning models. Compared to the Euclidean baseline, we observe > 20x greater guarantees on expected privacy against comparable worst case statistics. |
doi_str_mv | 10.1109/ICDM.2019.00031 |
format | conference_proceeding |
identifier | EISSN: 2374-8486 |
ispartof | 2019 IEEE International Conference on Data Mining (ICDM), 2019, p.210-219 |
issn | 2374-8486 |
language | eng |
recordid | cdi_ieee_primary_8970912 |
source | IEEE Xplore All Conference Series |
subjects | data sanitization document redaction privacy privacy analysis |
title | Leveraging Hierarchical Representations for Preserving Privacy and Utility in Text |
url | https://ieeexplore.ieee.org/document/8970912 |
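
The abstract in the description field outlines a mechanism that perturbs word embedding vectors under dx-privacy (metric differential privacy) so that a word is replaced by a semantically related one. As a rough illustration of that general idea only, and not of the paper's hyperbolic-space method, the sketch below implements the kind of Euclidean-baseline perturbation the abstract compares against: add noise whose density is proportional to exp(-epsilon * ||z||) to a word's vector, then return the nearest vocabulary word. The toy vocabulary, random embeddings, and all function names here are hypothetical stand-ins, not the authors' code.

```python
# Illustrative sketch only (assumptions: toy vocabulary, random vectors in
# place of real pre-trained embeddings; not the authors' hyperbolic mechanism).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and placeholder embedding matrix.
vocab = ["doctor", "nurse", "surgeon", "teacher", "lawyer", "engineer"]
dim = 8
embeddings = rng.normal(size=(len(vocab), dim))


def sample_euclidean_dx_noise(dim: int, epsilon: float) -> np.ndarray:
    """Sample noise with density proportional to exp(-epsilon * ||z||).

    Direction is uniform on the unit sphere; the radius follows a
    Gamma(dim, 1/epsilon) distribution.
    """
    direction = rng.normal(size=dim)
    direction /= np.linalg.norm(direction)
    magnitude = rng.gamma(shape=dim, scale=1.0 / epsilon)
    return magnitude * direction


def perturb_word(word: str, epsilon: float) -> str:
    """Perturb a word's vector and return the nearest vocabulary word."""
    vec = embeddings[vocab.index(word)]
    noisy = vec + sample_euclidean_dx_noise(dim, epsilon)
    distances = np.linalg.norm(embeddings - noisy, axis=1)
    return vocab[int(np.argmin(distances))]


if __name__ == "__main__":
    # Smaller epsilon means more noise, so the output word strays further
    # from the input; larger epsilon usually returns the word itself.
    for eps in (0.5, 5.0, 50.0):
        print(eps, [perturb_word("doctor", eps) for _ in range(5)])
```

The paper itself performs this perturbation in hyperbolic rather than Euclidean space, which changes both the distance metric and the noise distribution; this sketch only conveys the privacy/utility tradeoff controlled by epsilon.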