Constrained Obfuscation to Thwart Pattern Matching Attacks
Main Authors:
Format: Conference Proceeding
Language: English
Summary: We have recently proposed a model-free privacy-preserving mechanism (PPM) against attacks that compromise user privacy by matching patterns in data sequences to those unique to a given user [1]. Because the PPM is model-free, it places no requirements on the statistical model of the data, which is desirable when the model is not perfectly known. However, the proposed PPM did not constrain the values to which a data point may be obfuscated, so it could produce unlikely patterns that make it easy for an adversary to detect which values have been obfuscated. In this paper, we consider a constrained PPM that enforces a continuity constraint to avoid abrupt jumps in the obfuscated data. To design such a mechanism, we employ a graph-based analytical framework and the concept of consecutive patterns. At each point, the obfuscated value must be chosen strictly from that point's neighbors. Unfortunately, this restriction can increase the noise level required for obfuscation and thus unacceptably reduce utility. We propose a new obfuscation algorithm, the obfuscation-return algorithm, and characterize its privacy guarantees under continuity and noise-level constraints.
ISSN: 2157-8117
DOI: 10.1109/ISIT50566.2022.9834792
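The continuity constraint described in the summary can be illustrated with a minimal sketch. This is not the paper's obfuscation-return algorithm; the function name, the integer-valued data, and the two radii `noise` (noise-level/utility constraint) and `step` (continuity constraint) are assumptions made purely for illustration of the general idea: each obfuscated value is drawn only from values close to both the true data point and the previously emitted obfuscated point.

```python
import random

def constrained_obfuscate(seq, noise, step, seed=None):
    """Toy continuity-constrained obfuscation over integer data.

    Illustrative sketch only -- NOT the paper's obfuscation-return
    algorithm. Each output stays within `noise` of the true value
    (noise-level/utility constraint) and within `step` of the previous
    output (continuity constraint: no abrupt jumps).
    """
    if not seq:
        return []
    rng = random.Random(seed)
    out = []
    prev = seq[0]  # start the chain at the first true value
    for x in seq:
        # Intersection of the two constraint intervals.
        lo = max(x - noise, prev - step)
        hi = min(x + noise, prev + step)
        if lo > hi:
            # Constraints conflict: move as far toward the true value
            # as continuity allows (continuity takes priority here).
            y = prev + step if x > prev else prev - step
        else:
            y = rng.randint(lo, hi)
        out.append(y)
        prev = y
    return out
```

Note how the feasible interval shrinks as the two constraints fight each other: tightening `step` to suppress abrupt jumps leaves less room to add noise around the true value, which mirrors the utility-versus-privacy tension the summary describes.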