Hashing Fake: Producing Adversarial Perturbation for Online Privacy Protection Against Automatic Retrieval Models

Bibliographic Details
Published in: IEEE Transactions on Computational Social Systems, 2023-12, Vol. 10 (6), pp. 1-11
Main Authors: Zhang, Xingwei; Zheng, Xiaolong; Mao, Wenji; Zeng, Daniel Dajun; Wang, Fei-Yue
Format: Article
Language: English
Description
Summary: The wide application of deep neural networks (DNNs) has significantly improved the performance of hashing models on multimodal retrieval tasks. DNN-based deep models can automatically learn semantic features from raw data to make human-level decisions. However, this superior generalization brings potential privacy-leakage risks: strong DNN-based retrieval models enable malicious crawlers to search for untagged private information through semantic similarity matching. Effective privacy-protection mechanisms against such retrieval software are therefore essential for building reliable social websites. In this article, we propose a retrieval-task-based adversarial perturbation generation method, called Hashing Fake, to meet this need. Specifically, DNNs have recently been found to be vulnerable to adversarial perturbations: magnitude-restricted signals added to target samples that mislead well-trained DNN models while remaining small enough to escape human perception. Moreover, since existing adversarial perturbation generation methods are designed for supervised tasks, Hashing Fake constructs a differentiable approximation substitute so that perturbations can be produced for unsupervised retrieval tasks. Through extensive experiments on several deep retrieval benchmarks, we demonstrate that perturbations crafted with Hashing Fake effectively mislead target models into false predictions. Because the norm-restricted perturbations added to target samples do not alter human perception, Hashing Fake can be applied on real-world social websites to protect subscribers' privacy against malicious retrieval software.
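
The abstract gives the idea but not the method's equations, so the following is only a minimal, hypothetical PyTorch sketch of the general technique it describes: a norm-restricted, PGD-style perturbation crafted against a deep hashing model, with tanh standing in as a differentiable approximation of the non-differentiable sign() binarization used to produce hash codes. Everything here (the model, which is assumed to return continuous hash logits, and the epsilon, alpha, and steps parameters) is an illustrative placeholder, not the paper's actual interface.

import torch
import torch.nn as nn

def hashing_fake_perturb(model: nn.Module, x: torch.Tensor,
                         epsilon: float = 8 / 255,
                         alpha: float = 2 / 255,
                         steps: int = 10) -> torch.Tensor:
    # Craft an L-infinity-bounded perturbation that pushes the relaxed hash
    # code of x away from its original binary code, so that similarity-based
    # retrieval no longer matches the protected sample.
    model.eval()
    with torch.no_grad():
        target_code = torch.sign(model(x))  # binary hash code of the clean sample

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # tanh is a smooth, differentiable stand-in for sign() binarization
        relaxed_code = torch.tanh(model(x + delta))
        # minimizing the inner product with the original code is a relaxed way
        # of maximizing the Hamming distance between the two hash codes
        similarity = (relaxed_code * target_code).sum()
        similarity.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()        # signed-gradient descent step
            delta.clamp_(-epsilon, epsilon)           # magnitude restriction (imperceptibility)
            delta.copy_((x + delta).clamp(0, 1) - x)  # keep x + delta a valid image
        delta.grad.zero_()
    return (x + delta).detach()

Applied to an image before upload, the returned sample would look unchanged to a human viewer while hashing to a code far from the original, so crawlers that rely on hash-code similarity fail to retrieve it.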
ISSN: 2329-924X
EISSN: 2373-7476
DOI: 10.1109/TCSS.2022.3204120