Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism

Bibliographic Details
Published in: Medical physics (Lancaster), 2021-10, Vol. 48 (10), pp. 6198-6212
Main Authors: Chen, Lun; Zhao, Lu; Chen, Calvin Yu‐Chian
Format: Article
Language: English
Subjects: adversarial examples; attention mechanism; defense; medical image model; prune
Description:
Purpose: Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks (DNNs) are susceptible to small adversarial perturbations in the image, a vulnerability that raises safety concerns about deploying these systems in clinical settings.
Methods: To improve the defense of medical imaging systems against adversarial examples, we propose a new model‐based defense framework for medical image DNNs, equipped with a pruning module and an attention mechanism module. The framework is motivated by an analysis of why existing medical image DNNs are vulnerable to adversarial examples: the complex biological texture of medical images and the overparameterization of medical image DNN models.
Results: Experiments on three benchmark medical image datasets verify the effectiveness of our method in improving the robustness of medical image DNNs. On the chest X‐ray dataset, our defense achieves a defense rate of up to 77.18% against the projected gradient descent (PGD) attack and 69.49% against the DeepFool attack. Ablation experiments on the pruning module and the attention mechanism module further confirm that both effectively improve the robustness of the medical image DNN.
Conclusions: Compared with existing model‐based defense methods proposed for natural images, our defense method is better suited to medical images. It can serve as a general strategy for designing more explainable and secure medical deep learning systems, and can be widely applied across medical image tasks to improve the robustness of medical models.
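The abstract describes the defense only at a high level, and the paper's implementation is not part of this record. As a rough, hypothetical PyTorch sketch of the ingredients it names (channel attention, weight pruning, and a defense-rate metric under a PGD attack): `ToyNet`, `ChannelAttention`, `prune_model`, and all hyperparameters (eps = 8/255, 50% sparsity, 10 PGD steps) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class ChannelAttention(nn.Module):
    """Hypothetical squeeze-and-excitation style channel attention block."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)  # per-channel weights in (0, 1)
        return x * w

class ToyNet(nn.Module):
    """Toy stand-in for a medical image classifier: conv backbone + attention."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, 3, padding=1)
        self.attn = ChannelAttention(16)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(16, num_classes))

    def forward(self, x):
        return self.head(self.attn(torch.relu(self.conv(x))))

def prune_model(model, amount=0.5):
    """Global magnitude (L1) pruning over all conv/linear weights."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=amount)
    return model

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD: iterated signed-gradient steps, projected to the eps-ball."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x.clone().detach() + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv

def defense_rate(model, x, y):
    """Fraction of adversarial examples the model still classifies correctly."""
    x_adv = pgd_attack(model, x, y)
    with torch.no_grad():
        return (model(x_adv).argmax(1) == y).float().mean().item()

# Usage on random data in place of chest X-rays:
model = prune_model(ToyNet())
x, y = torch.rand(8, 1, 64, 64), torch.randint(0, 2, (8,))
print(f"defense rate under PGD: {defense_rate(model, x, y):.2%}")
```

In this framing, the "defense rate" is simply robust accuracy on attacked inputs; the paper's reported numbers (77.18% under PGD, 69.49% under DeepFool) are measurements of such a quantity on real chest X‐ray data, not something this toy sketch reproduces.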
DOI: 10.1002/mp.15208
ISSN: 0094-2405
EISSN: 2473-4209
Source: Wiley