Grasping Adversarial Attacks on Deep Convolutional Neural Networks for Cholangiocarcinoma Classification
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Researchers have proposed many novel deep convolutional neural network (CNN) architectures that achieve state-of-the-art performance on computer vision benchmark datasets. Nevertheless, deep CNNs remain vulnerable to adversarial attacks, which can effectively fool them; as a consequence, such attacks can hinder a deployed model in production. This vulnerability also poses a new obstacle to the development of future deep CNN architectures, especially in critical fields such as healthcare, where deep learning is used to assist physicians in detecting disease from medical images. Since many novel adversarial attack methods exist, this work evaluates the attack efficacy of various adversarial attack methods against model performance, specifically on the cholangiocarcinoma classification task. Intriguingly, in our experiments with several CNN models, EfficientNet-B0 gives the highest average accuracy after being attacked by several adversarial attack methods.
ISSN: 2575-5145
DOI: 10.1109/EHB52898.2021.9657589
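
To illustrate the kind of attack the paper evaluates, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one standard adversarial attack, applied to a torchvision EfficientNet-B0 classifier in PyTorch. The two-class setup, the epsilon value, and the random placeholder images and labels are illustrative assumptions; the paper's actual attack methods, dataset, and results are not reproduced here.

```python
# Hedged sketch: FGSM attack on an EfficientNet-B0 classifier (PyTorch).
# Assumptions (not from the paper): binary classification, epsilon=0.03,
# random placeholder inputs in place of a real medical-image dataset.
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_b0

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Perturb `images` one signed-gradient step in the loss-increasing direction."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    # Keep the adversarial images in the valid pixel range.
    return adv.clamp(0.0, 1.0).detach()

# Hypothetical two-class model (e.g. cholangiocarcinoma vs. normal), untrained here.
model = efficientnet_b0(num_classes=2)
model.eval()

x = torch.rand(4, 3, 224, 224)   # placeholder batch of images
y = torch.randint(0, 2, (4,))    # placeholder labels
x_adv = fgsm_attack(model, x, y)

with torch.no_grad():
    clean_acc = (model(x).argmax(1) == y).float().mean().item()
    adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

Comparing accuracy on clean versus perturbed inputs, as in the last lines above, is the kind of per-attack, per-model evaluation the summary describes; the study additionally compares several CNN architectures and several attack methods beyond this single-attack sketch.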