
Enhancing adversarial defense for medical image analysis systems with pruning and attention mechanism



Bibliographic Details
Published in: Medical Physics (Lancaster), 2021-10, Vol. 48 (10), p. 6198-6212
Main Authors: Chen, Lun; Zhao, Lu; Chen, Calvin Yu‐Chian
Format: Article
Language: English
Description
Summary:
Purpose: Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks (DNNs) are susceptible to small adversarial perturbations in images, which raises safety concerns about the deployment of these systems in clinical settings.
Methods: To improve the defense of medical imaging systems against adversarial examples, we propose a new model‐based defense framework for medical image DNN models, equipped with a pruning module and an attention mechanism module. The framework is motivated by our analysis of why existing medical image DNN models are vulnerable to adversarial examples: the complex biological textures of medical images and the overparameterization of the models.
Results: Experiments on three benchmark medical image datasets verify that our method improves the robustness of medical image DNN models. On the chest X‐ray datasets, our defense achieves defense rates of up to 77.18% against the projected gradient descent (PGD) attack and 69.49% against the DeepFool attack. Ablation experiments on the pruning module and the attention mechanism module further confirm that both components effectively improve the robustness of the model.
Conclusions: Compared with existing model‐based defense methods designed for natural images, our defense method is better suited to medical images. It offers a general strategy for designing more explainable and secure medical deep learning systems, and can be applied to a wide range of medical image tasks to improve model robustness.
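The abstract attributes the vulnerability to overparameterization and complex biological texture, and counters it with pruning plus an attention mechanism. As a rough illustration only (not the authors' published code), the sketch below shows how these ingredients are commonly combined in PyTorch: L1-magnitude weight pruning on convolutional layers, a squeeze-and-excitation-style channel attention block, and the standard L-infinity PGD attack used in the reported evaluation. All names (AttentionConvBlock, prune_model, pgd_attack) and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class AttentionConvBlock(nn.Module):
    """Conv layer followed by SE-style channel attention (illustrative)."""
    def __init__(self, in_ch, out_ch, reduction=4):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        # Squeeze-and-excitation: global pool -> bottleneck MLP -> sigmoid gate
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // reduction, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.relu(self.conv(x))
        return x * self.attn(x)  # reweight channels, downweighting noisy texture

def prune_model(model, amount=0.5):
    """L1-magnitude prune the weights of every conv layer in place."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return model

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-inf PGD, the attack family the abstract evaluates against."""
    x, y = x.clone().detach(), y.clone().detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # keep valid pixel range
    return x_adv.detach()
```

Under these assumptions, a defended model would be built from AttentionConvBlock layers, pruned with prune_model after training, and then evaluated by comparing accuracy on pgd_attack(model, x, y) against clean accuracy; the defense rate reported in the abstract measures how often the defended model resists such attacks.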
ISSN: 0094-2405
2473-4209
DOI: 10.1002/mp.15208