Pseudo unlearning via sample swapping with hash
Published in: Information Sciences, 2024-03, Vol. 662, Article 120135
Main Authors:
Format: Article
Language: English
Summary: Machine unlearning is a recently proposed paradigm that enables a machine learning (ML) model to delete specific data. Specifically, the data owner has the right to ask the machine learning as a service (MLaaS) provider to remove the influence of specific data from the trained model, so as to protect the privacy of the forgotten data. However, achieving the desired effect comes at a cost, so a dishonest MLaaS provider has an incentive to fabricate fake forgetting feedback that deceives the data owner's unlearning verification without performing any unlearning operation at all. The primary objective of the paper is to understand potential vulnerabilities in machine unlearning mechanisms and to contribute to the development of more robust, trustworthy, and secure unlearning solutions. We propose the concept of pseudo unlearning for the first time and design an efficient scheme, sample swapping with hash (SSH), targeting verification mechanisms based on membership inference attacks. We conduct extensive experiments on different datasets, and the performance achieved on the evaluation metrics, as well as against membership-inference-attack verification mechanisms, shows the feasibility of pseudo unlearning.
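The abstract describes verification via membership inference attacks (MIA) as the mechanism the provider tries to fool. The following is a minimal, hypothetical sketch of that setting, not the paper's actual SSH algorithm: a toy confidence-based membership score the data owner might use to check whether a sample was unlearned, plus a coarse sample-hash function of the kind a dishonest provider could use to find a stand-in sample. All function names, the threshold value, and the hashing choice are assumptions for illustration only.

```python
# Hypothetical illustration of MIA-style unlearning verification and a
# sample-hash helper. This is NOT the SSH scheme from the paper; it only
# sketches the verification setting the abstract refers to.
import hashlib
import numpy as np


def mia_membership_score(predict_proba, x, y):
    """Toy MIA signal: the model's confidence on the true label.
    High confidence is treated as evidence the sample was trained on."""
    return float(predict_proba(x)[y])


def owner_verifies_unlearning(predict_proba, x_forget, y_forget, threshold=0.5):
    """Data owner accepts the unlearning claim if the membership signal
    on the supposedly forgotten sample drops below a threshold."""
    return mia_membership_score(predict_proba, x_forget, y_forget) < threshold


def sample_hash(x, n_buckets=1024):
    """Coarse hash bucket for a sample; a dishonest provider could use such
    a hash to look up a substitute sample that matches the forget sample."""
    digest = hashlib.sha256(np.asarray(x, dtype=np.float32).tobytes()).hexdigest()
    return int(digest, 16) % n_buckets


# --- toy usage -------------------------------------------------------------
rng = np.random.default_rng(0)


def fake_predict_proba(x):
    # Stand-in for a trained model's softmax output over 2 classes.
    logits = np.array([x.sum(), -x.sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()


x_forget, y_forget = rng.normal(size=8), 0
print("hash bucket of forget sample:", sample_hash(x_forget))
print("owner accepts unlearning claim:",
      owner_verifies_unlearning(fake_predict_proba, x_forget, y_forget))
```

In this toy setting, pseudo unlearning would mean manipulating the inputs or feedback seen by the verification check so that the membership score falls below the threshold without actually retraining or modifying the model; the paper's SSH scheme is an efficient instantiation of that idea for MIA-based verification.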
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2024.120135