
A black-box reversible adversarial example for authorizable recognition to shared images

Highlights:
•A Perturbation Generative Network (PGN) is proposed to generate Adversarial Examples (AEs) under black-box scenarios. A discriminator is employed to enhance fidelity, and the generated adversarial noise is further compressed by a designed compression strategy.
•A Black-box Reversible Adversarial Example (B-RAE) scheme is proposed to protect shared images; it not only generates adversarial examples efficiently but also balances their visual quality and attack ability more flexibly.
•The PGN generates adversarial examples with high robustness and transfer attack ability, and an ensemble strategy is applied to strengthen the attack against different models. The robustness and black-box attack ability of B-RAE make it a promising solution for practical applications.

Full description:
Shared images on the Internet are easily collected, classified, and analyzed by unauthorized commercial companies through Deep Neural Networks (DNNs). The illegal use of these data damages the rights and interests of authorized companies and individuals, so ensuring that network-shared data is used only by authorized users, and never by unauthorized DNNs, has become an urgent problem. The Reversible Adversarial Example (RAE) provides an effective solution: it misleads the classification of unauthorized DNNs while leaving authorized users unaffected. Existing RAE schemes assume that the parameters of the target model are known and generate reversible adversarial examples from them. However, model parameters are usually protected against leakage, which makes generating accurate RAEs harder. In this paper, we propose the first Black-box Reversible Adversarial Example (B-RAE) scheme, which generates robust reversible adversarial examples with the aim of protecting image privacy while maintaining data usability in real scenarios. Experimental results and analysis demonstrate that the proposed B-RAE is more effective and robust than existing schemes.
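The highlights sketch a GAN-style pipeline: a generator produces a bounded perturbation, a discriminator keeps the result visually faithful, and an ensemble of models stands in for the unseen black-box target. The record does not reproduce the paper's actual architecture or losses, so the following is only a minimal illustrative sketch in PyTorch; the class name, layer sizes, epsilon bound, and the ensemble_attack_loss helper are assumptions, and the discriminator/fidelity term and noise-compression step are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbationGenerator(nn.Module):
    """Hypothetical stand-in for a PGN: maps an image to an additive
    perturbation bounded by eps (via tanh), keeping a valid image."""
    def __init__(self, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Perturbation lies in [-eps, eps]; clamp keeps pixels in [0, 1].
        return torch.clamp(x + self.eps * torch.tanh(self.net(x)), 0.0, 1.0)

def ensemble_attack_loss(adv: torch.Tensor, labels: torch.Tensor,
                         surrogates: list) -> torch.Tensor:
    """Untargeted transfer loss: push every surrogate classifier away from
    the true labels; the ensemble average approximates the unknown target."""
    return -torch.stack(
        [F.cross_entropy(m(adv), labels) for m in surrogates]).mean()
```

Training would alternate generator steps on this transfer loss with discriminator steps on a fidelity loss, in the usual GAN fashion, before compressing the resulting noise for reversible embedding.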

Bibliographic Details
Published in: Pattern Recognition, 2023-08, Vol. 140, Article 109549
Main Authors: Xiong, Lizhi; Wu, Yue; Yu, Peipeng; Zheng, Yuhui
Format: Article
Language: English
DOI: 10.1016/j.patcog.2023.109549
ISSN: 0031-3203; EISSN: 1873-5142
Subjects: Prediction-error histogram; Reversible adversarial example; Reversible data hiding
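The subject term "Prediction-error histogram" points at the reversible-data-hiding half of such a scheme: the information needed to undo the perturbation is embedded losslessly, so authorized users recover the exact original image. Below is a minimal, self-contained NumPy sketch of prediction-error histogram shifting, not the paper's algorithm; the function names are hypothetical, and for brevity it assumes no saturated pixels and omits the payload-length header and overflow location map that practical schemes carry.

```python
import numpy as np

def pee_embed(img: np.ndarray, bits: list) -> np.ndarray:
    """Embed bits into zero-valued horizontal prediction errors; positive
    errors are shifted by +1 to make room (assumes no pixel overflows)."""
    out = img.astype(np.int64).copy()
    k = 0
    for r in range(out.shape[0]):
        for c in range(1, out.shape[1]):
            pred = out[r, c - 1]             # marked left neighbour as predictor
            e = out[r, c] - pred
            if e > 0:
                out[r, c] = pred + e + 1     # shift bins e >= 1 up by one
            elif e == 0 and k < len(bits):
                out[r, c] = pred + bits[k]   # peak bin carries one payload bit
                k += 1
    return out

def pee_extract(marked: np.ndarray):
    """Read the payload back and restore the original image exactly."""
    rec = marked.astype(np.int64).copy()
    bits = []
    for r in range(marked.shape[0]):
        for c in range(1, marked.shape[1]):
            pred = int(marked[r, c - 1])     # same predictor as at embedding
            e = int(marked[r, c]) - pred
            if e > 1:
                rec[r, c] = pred + e - 1     # undo the shift
            elif e in (0, 1):
                bits.append(e)               # original error was 0
                rec[r, c] = pred
    return rec, bits

# Usage: a flat region gives many zero prediction errors (high capacity).
img = np.full((4, 6), 100, dtype=np.uint8)
marked = pee_embed(img, [1, 0, 1, 1])
rec, payload = pee_extract(marked)
assert np.array_equal(rec, img) and payload[:4] == [1, 0, 1, 1]
```

Extraction reads trailing zero-error pixels as spurious 0 bits, which is why real schemes embed the payload length first.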