Pseudo unlearning via sample swapping with hash

Bibliographic Details
Published in: Information sciences, 2024-03, Vol. 662, p. 120135, Article 120135
Main Authors: Li, Lang; Ren, Xiaojun; Yan, Hongyang; Liu, Xiaozhang; Zhang, Zhenxin
Format: Article
Language: English
Subjects: Machine unlearning; Membership inference attack; Unlearning verification
Publisher: Elsevier Inc
DOI: 10.1016/j.ins.2024.120135
ISSN: 0020-0255
EISSN: 1872-6291
Source: ScienceDirect Freedom Collection

Description
Machine unlearning is a recently proposed paradigm that allows a machine learning (ML) model to delete specific data. Specifically, the data owner has the right to ask the machine learning as a service (MLaaS) provider to remove the impact of specific data from the trained model, so as to protect the privacy of the forgotten data. However, achieving that effect comes at a cost: unlearning operations are expensive, so a dishonest MLaaS provider has an incentive to return fake forgetting feedback that deceives the data owner's unlearning verification without performing any unlearning at all. The primary objective of the paper is to expose potential vulnerabilities in machine unlearning mechanisms and to contribute to the development of more robust, trustworthy, and secure unlearning solutions. We propose the concept of Pseudo Unlearning for the first time and design an efficient scheme, sample swapping with hash (SSH), aimed at verification mechanisms based on membership inference attacks. Extensive experiments on different datasets show, both in standard evaluation metrics and against membership-inference-based verification, that pseudo unlearning is feasible.
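
The verification mechanism referenced above is a membership inference check that the data owner runs on the forgotten samples after the provider claims to have unlearned them. The following is a rough, minimal sketch of that kind of check only, not the paper's SSH scheme; the predict_proba callable, the thresholds, and the toy model are all illustrative assumptions.

# Illustrative sketch only: a confidence-threshold membership inference check of the
# kind a data owner might use to verify unlearning. This is NOT the paper's SSH
# scheme; predict_proba, the thresholds, and the toy model are assumptions.
import numpy as np

def membership_scores(predict_proba, samples, labels):
    # Model confidence on the true label of each forgotten sample; high confidence
    # is (loose) evidence that the sample still behaves like a training member.
    probs = predict_proba(samples)                      # shape (n_samples, n_classes)
    return probs[np.arange(len(labels)), labels]

def verify_unlearning(predict_proba, forget_x, forget_y, threshold=0.9):
    # Flag the unlearning claim as suspicious if most forgotten samples are still
    # predicted with member-like confidence above `threshold` (a tunable assumption).
    scores = membership_scores(predict_proba, forget_x, forget_y)
    member_like = float(np.mean(scores > threshold))
    return {
        "mean_confidence": float(scores.mean()),
        "fraction_member_like": member_like,
        "looks_unlearned": member_like < 0.5,           # heuristic cutoff, assumed
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def fake_predict_proba(x):
        # Stand-in for a model that was *not* actually unlearned: it stays highly
        # confident on class 0, so the check below should reject the claim.
        logits = rng.normal(size=(len(x), 10))
        logits[:, 0] += 6.0
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    forget_x = rng.normal(size=(32, 8))
    forget_y = np.zeros(32, dtype=int)
    print(verify_unlearning(fake_predict_proba, forget_x, forget_y))

A check of this form only observes the model's behavior on the queried samples, so a provider that merely suppresses member-like responses on those samples can pass it without retraining, which is the kind of weakness the paper's pseudo unlearning studies.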