
Adversarial Examples Generation for Deep Product Quantization Networks on Image Retrieval

Bibliographic Details
Published in: IEEE transactions on pattern analysis and machine intelligence, 2023-02, Vol.45 (2), p.1388-1404
Main Authors: Chen, Bin; Feng, Yan; Dai, Tao; Bai, Jiawang; Jiang, Yong; Xia, Shu-Tao; Wang, Xuan
Format: Article
Language:English
container_end_page 1404
container_issue 2
container_start_page 1388
container_title IEEE transactions on pattern analysis and machine intelligence
container_volume 45
creator Chen, Bin
Feng, Yan
Dai, Tao
Bai, Jiawang
Jiang, Yong
Xia, Shu-Tao
Wang, Xuan
description Deep product quantization networks (DPQNs) have been successfully used in image retrieval tasks, due to their powerful feature extraction ability and high efficiency in encoding high-dimensional visual features. Recent studies show that deep neural networks (DNNs) are vulnerable to inputs with small, maliciously designed perturbations (a.k.a. adversarial examples) for classification. However, little effort has been devoted to investigating how adversarial examples affect DPQNs, which raises a potential safety hazard when deploying DPQNs in a commercial search engine. To this end, we propose an adversarial example generation framework that generates adversarial query images for DPQN-based retrieval systems. Unlike adversarial generation for the classic image classification task, which heavily relies on ground-truth labels, we instead perturb the probability distribution of centroid assignments for a clean query; this lets us induce effective non-targeted attacks on DPQNs in white-box and black-box settings. Moreover, we extend the non-targeted attack to a targeted attack via a novel sample space averaging scheme (S²AS), for which a theoretical guarantee is also obtained. Extensive experiments show that our methods can create adversarial examples that successfully mislead the target DPQNs, and that both methods significantly degrade retrieval performance under a wide variety of experimental settings. The source code is available at https://github.com/Kira0096/PQAG .
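The core idea in the abstract — attacking the distribution of centroid assignments for a query, rather than a class label — can be sketched in NumPy. Everything below is an illustrative assumption, not the authors' implementation (see their linked repository): the function names (`soft_assignments`, `perturb_feature`), the softmax-over-negative-squared-distances soft assignment, the finite-difference gradient, and the step sizes are all chosen here for exposition. The sketch perturbs a feature vector so that the KL divergence between its clean and perturbed centroid-assignment distributions grows, the non-targeted flavor of the attack.

```python
import numpy as np

def soft_assignments(feature, codebook, temperature=1.0):
    """Soft probability of assigning `feature` to each centroid:
    a softmax over negative squared distances, as in soft quantization layers."""
    d2 = ((codebook - feature) ** 2).sum(axis=1)   # squared distance to each centroid
    logits = -d2 / temperature
    logits = logits - logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def kl(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions."""
    return float((p * np.log((p + eps) / (q + eps))).sum())

def perturb_feature(feature, codebook, step=0.05, iters=20, fd=1e-4, seed=0):
    """Non-targeted attack sketch: push the feature so that its assignment
    distribution diverges (in KL) from the clean one. The gradient is
    estimated by central finite differences for simplicity; a real attack
    would backpropagate through the network instead."""
    rng = np.random.default_rng(seed)
    p_clean = soft_assignments(feature, codebook)
    x = feature + 0.01 * rng.normal(size=feature.shape)  # random start off the KL minimum
    for _ in range(iters):
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = fd
            grad[i] = (kl(p_clean, soft_assignments(x + e, codebook))
                       - kl(p_clean, soft_assignments(x - e, codebook))) / (2 * fd)
        x = x + step * np.sign(grad)   # FGSM-style signed ascent step on the KL objective
    return x
```

In the paper the perturbation is applied to the query image and propagated through the deep network; here it is applied directly to a feature vector so the mechanism of the assignment-distribution objective is visible in isolation.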
doi_str_mv 10.1109/TPAMI.2022.3165024
format article
publisher United States: IEEE
pmid 35380957
coden ITPIDJ
tpages 17
fulltext fulltext
identifier ISSN: 0162-8828
ispartof IEEE transactions on pattern analysis and machine intelligence, 2023-02, Vol.45 (2), p.1388-1404
issn 0162-8828
1939-3539
2160-9292
language eng
recordid cdi_proquest_miscellaneous_2647653858
source IEEE Xplore (Online service)
subjects adversarial attack
Artificial neural networks
Centroids
deep learning
Feature extraction
Image classification
Image retrieval
KL divergence
Measurement
Perturbation
Perturbation methods
Probability distribution
Product quantization
Quantization (signal)
Search engines
Source code
Task analysis
title Adversarial Examples Generation for Deep Product Quantization Networks on Image Retrieval