A black-box reversible adversarial example for authorizable recognition to shared images
Published in: Pattern Recognition, 2023-08, Vol. 140, p. 109549, Article 109549
Main Authors: , , ,
Format: Article
Language: English
Summary:
•A Perturbation Generative Network (PGN) is proposed to generate Adversarial Examples (AEs) in black-box scenarios. A discriminator is employed to enhance fidelity, and the generated adversarial noise is further compressed by a designed compression strategy.
•A Black-box Reversible Adversarial Example (B-RAE) scheme is proposed to protect shared images; it not only generates adversarial examples efficiently but also balances their visual quality and attack ability more flexibly.
•The PGN generates adversarial examples with high robustness and transfer-attack ability, and an ensemble strategy is applied to strengthen the attack against different models. The robustness and black-box attack ability of B-RAE provide a promising solution for practical applications.
Shared images on the Internet are easily collected, classified, and analyzed by unauthorized commercial companies using Deep Neural Networks (DNNs). The illegal use of these data damages the rights and interests of authorized companies and individuals. Ensuring that network-shared data can be used by authorized users but not by unauthorized DNNs has become an urgent problem. The Reversible Adversarial Example (RAE) provides an effective solution: it misleads the classification of unauthorized DNNs without affecting authorized users. Existing RAE schemes assume that the parameters of the target model are known and use them to generate reversible adversarial examples. In practice, however, model parameters are usually protected against leakage, which makes generating accurate RAEs more difficult. In this paper, we propose a Black-box Reversible Adversarial Example (B-RAE) scheme that generates robust reversible adversarial examples, aiming to protect image privacy while maintaining data usability in real scenarios. Experimental results and analysis demonstrate that the proposed B-RAE is more effective and robust than existing schemes.
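To make the black-box setting concrete, the sketch below illustrates a transfer attack: the perturbation is crafted entirely on a local surrogate model and then applied to a separate target model whose parameters are never accessed, which is the threat model B-RAE addresses. This is a minimal FGSM-style illustration on hypothetical toy linear classifiers, not the paper's PGN; all model names and values here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two linear "models" stand in for DNN classifiers.
# W_surrogate is the attacker's local model; W_target plays the unauthorized
# black-box model. Its weights are never used when crafting the perturbation.
W_surrogate = rng.normal(size=(10, 64))
W_target = W_surrogate + 0.1 * rng.normal(size=(10, 64))  # correlated, unseen

def predict(W, x):
    """Predicted class of a linear classifier."""
    return int(np.argmax(W @ x))

def fgsm_transfer(x, y_true, W, eps=0.3):
    """One-step sign-gradient perturbation computed on the surrogate.

    For a linear softmax model, the cross-entropy gradient w.r.t. x is
    W^T (softmax(Wx) - onehot(y_true)); stepping along its sign increases
    the surrogate's loss, and the perturbation often transfers.
    """
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    p[y_true] -= 1.0
    grad = W.T @ p
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)  # keep valid pixel range

x = rng.random(64)                     # a "shared image" as a flat vector
y = predict(W_target, x)               # label the target currently assigns
x_adv = fgsm_transfer(x, y, W_surrogate)
transferred = predict(W_target, x_adv) != y  # attack crafted without W_target
```

The perturbation budget `eps` plays the role of the fidelity/attack-ability trade-off that B-RAE balances: a smaller `eps` preserves visual quality, a larger one strengthens the (transfer) attack.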
ISSN: 0031-3203, 1873-5142
DOI: 10.1016/j.patcog.2023.109549