
Anatomical context protects deep learning from adversarial perturbations in medical imaging

Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2020-02, Vol. 379, pp. 370-378
Main Authors: Li, Yi, Zhang, Huahong, Bermudez, Camilo, Chen, Yifan, Landman, Bennett A., Vorobeychik, Yevgeniy
Format: Article
Language:English
Description
Summary: Deep learning has achieved impressive performance across a variety of tasks, including medical image processing. However, recent research has shown that deep neural networks are susceptible to small adversarial perturbations in the image. We study the impact of such adversarial perturbations in medical image processing where the goal is to predict an individual’s age based on a 3D MRI brain image. We consider two models: a conventional deep neural network, and a hybrid deep learning model which additionally uses features informed by anatomical context. We find that we can introduce significant errors in predicted age by adding imperceptible noise to an image, can accomplish this even for large batches of images using a single perturbation, and that the hybrid model is much more robust to adversarial perturbations than the conventional deep neural network. Our work highlights limitations of current deep learning techniques in clinical applications, and suggests a path forward.
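The summary above describes adding imperceptible noise to an input to shift a regression model's output. A standard way to construct such noise is a gradient-sign (FGSM-style) perturbation; the sketch below illustrates the idea on a toy linear "age predictor" in NumPy. The model, weights, and epsilon budget here are illustrative assumptions, not the paper's actual 3D-MRI networks or attack.

```python
import numpy as np

# Toy stand-in for an age-regression model: a linear map over a
# flattened "image". The paper's deep networks are far richer; this
# only illustrates the gradient-sign perturbation mechanism.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)          # hypothetical model weights
x = rng.normal(size=1000)          # hypothetical flattened input image

def predict_age(img):
    """Illustrative linear regression 'age' prediction."""
    return float(w @ img) / 10.0

# For a linear model, the gradient of the output w.r.t. the input is w.
# FGSM perturbs each pixel by +/- eps in the direction of the gradient,
# keeping the L-infinity norm of the noise at eps (imperceptible for
# small eps) while every pixel's change accumulates in the output.
eps = 0.01
x_adv = x + eps * np.sign(w)

shift = predict_age(x_adv) - predict_age(x)
# shift = eps * sum(|w|) / 10: tiny per-pixel noise, large output change
```

The paper goes further, showing that a single perturbation can be reused across large batches of images (a universal perturbation), and that anatomically informed features blunt the attack; neither refinement is shown in this sketch.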
ISSN: 0925-2312; 1872-8286
DOI: 10.1016/j.neucom.2019.10.085