Vulnerable point detection and repair against adversarial attacks for convolutional neural networks

Recently, convolutional neural networks have been shown to be sensitive to artificially designed perturbations that are imperceptible to the naked eye. Whether the task is image classification, semantic segmentation, or object detection, all such models face this problem. The existence of adversarial examples raises questions about the security of smart applications. Several works have addressed this problem and proposed defensive strategies to resist adversarial attacks; however, none has explored the areas of a model that are vulnerable under multiple attacks. In this work, we fill this gap by exploring these vulnerable areas. Specifically, under various attack methods of different strengths, we conduct extensive experiments on two datasets with three different networks and illustrate several phenomena. In addition, by exploiting a Siamese network, we propose a novel approach to more intuitively discover the deficiencies of a model. Finally, we provide a novel adaptive vulnerable-point repair method to improve the adversarial robustness of the model. Extensive experimental results show that our proposed method effectively improves the adversarial robustness of the model.
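
For context on the attacks the abstract refers to: an adversarial attack crafts a small perturbation of the input that flips the model's prediction. As a point of reference only (this is not the authors' detection or repair method), below is a minimal PyTorch sketch of the fast gradient sign method (FGSM), a standard single-step attack commonly used to probe CNN robustness; the model, inputs, and epsilon values are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon):
        # FGSM: x_adv = x + epsilon * sign(grad_x loss); for small epsilon the
        # perturbation is imperceptible to the naked eye yet changes the output.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()  # populates images.grad
        adv = images + epsilon * images.grad.sign()
        return adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

    # Sweeping epsilon (e.g., from 1/255 up to 8/255) emulates the "attack
    # methods of different strengths" mentioned in the abstract.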

Bibliographic Details
Published in: International Journal of Machine Learning and Cybernetics, 2023-12, Vol. 14 (12), p. 4163-4192
Main Authors: Gao, Jie; Xia, Zhaoqiang; Dai, Jing; Dang, Chen; Jiang, Xiaoyue; Feng, Xiaoyi
Format: Article
Language: English
Subjects: Algorithms; Artificial Intelligence; Artificial neural networks; Classification; Complex Systems; Computational Intelligence; Control; Engineering; Image classification; Image segmentation; Mechatronics; Methods; Neural networks; Object recognition; Original Article; Pattern Recognition; Robotics; Robustness; Semantic segmentation; Systems Biology
Publisher: Berlin/Heidelberg: Springer Berlin Heidelberg
ISSN: 1868-8071
EISSN: 1868-808X
DOI: 10.1007/s13042-023-01888-5