Evolutionary Algorithm-Based Images, Humanly Indistinguishable and Adversarial Against Convolutional Neural Networks: Efficiency and Filter Robustness

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, pp. 160758-160778
Main Authors: Chitic, Raluca, Topal, Ali Osman, Leprevost, Franck
Format: Article
Language: English
Description
Summary: Convolutional neural networks (CNNs) have become one of the most important tools for image classification. However, many models are susceptible to adversarial attacks that cause CNNs to misclassify. In previous works, we successfully developed an EA-based black-box attack that creates adversarial images for the target scenario, fulfilling two criteria: the CNN should classify the adversarial image in the target category with a confidence ≥ 0.95, and a human should not notice any difference between the adversarial and original images. Thanks to extensive experiments performed with the CNN $\mathcal{C}$ = VGG-16, trained on the CIFAR-10 dataset to classify images into 10 categories, this paper, which substantially enhances most aspects of Chitic et al. (2021), addresses four issues. (1) From a pure EA point of view, we highlight the conceptual originality of our algorithm $\mathrm{EA}_{d}^{\mathrm{target},\mathcal{C}}$ compared with the classical EA approach; the resulting competitive advantage is assessed experimentally on image classification. (2) We then measure the intrinsic performance of the EA-based attack on an extensive series of ancestor images. (3) We challenge the filter resistance of the adversarial images created by the EA against five well-known filters. (4) We proceed to the creation of natively filter-resistant adversarial images that can fool humans, CNNs, and CNNs composed with filters.
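To make the target scenario concrete, here is a minimal Python sketch of an EA-based black-box attack loop of the kind the summary describes. It is an illustration under stated assumptions, not the authors' $\mathrm{EA}_{d}^{\mathrm{target},\mathcal{C}}$: the names classify, fitness, and ea_attack are hypothetical, and the Gaussian mutation, truncation selection, and mean-absolute-difference distortion proxy stand in for the operators defined in the article.

    import numpy as np

    def fitness(candidate, ancestor, classify, target, alpha=1.0):
        # Reward high target-class confidence, penalise visible deviation
        # from the ancestor (a crude proxy for human perceptibility).
        confidence = classify(candidate)[target]          # black-box softmax query
        distortion = np.abs(candidate - ancestor).mean()
        return confidence - alpha * distortion

    def ea_attack(ancestor, classify, target, pop_size=40,
                  generations=10_000, sigma=2.0, rng=None):
        # Evolve small perturbations of `ancestor` (pixel values in [0, 255])
        # until the CNN classifies the image in `target` with confidence >= 0.95.
        rng = rng if rng is not None else np.random.default_rng()
        population = [np.clip(ancestor + rng.normal(0.0, sigma, ancestor.shape), 0, 255)
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda c: fitness(c, ancestor, classify, target),
                            reverse=True)
            if classify(population[0])[target] >= 0.95:   # target-scenario success
                return population[0]
            parents = population[:pop_size // 2]          # truncation selection
            children = [np.clip(p + rng.normal(0.0, sigma, p.shape), 0, 255)
                        for p in parents]                 # Gaussian mutation
            population = parents + children
        return None  # no adversarial image found within the generation budget

The loop halts as soon as the black-box queries report a target-category confidence of at least 0.95, i.e. the first success criterion; the distortion penalty in the fitness is only a rough stand-in for the second, human-imperceptibility criterion.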
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3131255