An Annealing Mechanism for Adversarial Training Acceleration
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2023-02, Vol. 34 (2), pp. 882-893
Main Authors:
Format: Article
Language: English
Summary: Despite their empirical success in various domains, deep neural networks have been shown to be vulnerable to maliciously perturbed input data, known as adversarial attacks, that can dramatically degrade their performance. To counter adversarial attacks, adversarial training, formulated as a form of robust optimization, has been demonstrated to be effective. However, adversarial training incurs much more computational overhead than standard training. To reduce this cost, we propose an annealing mechanism for adversarial training acceleration (Amata). The proposed Amata is provably convergent, well motivated from the lens of optimal control theory, and can be combined with existing acceleration methods to further enhance performance. It is demonstrated that, on standard datasets, Amata achieves similar or better robustness with around 1/3 to 1/2 of the computational time of traditional methods. In addition, Amata can be incorporated into other adversarial training acceleration algorithms (e.g., YOPO, Free, Fast, and ATTA), which leads to a further reduction in computational time on large-scale problems.
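
The summary describes adversarial training as robust optimization (an inner maximization over perturbations nested inside the outer training loop) and an annealing schedule that reduces its cost. Below is a minimal, hypothetical PyTorch sketch of PGD-based adversarial training in which the inner attack budget is annealed from few to many steps over the course of training. It is not the authors' Amata algorithm: `pgd_attack`, `annealed_steps`, `adversarial_train`, the linear schedule, and all hyperparameters are illustrative assumptions meant only to show the general idea.

```python
# Hypothetical sketch: PGD adversarial training with an annealed attack budget.
# This is NOT the Amata algorithm from the paper; it only illustrates spending
# fewer attack iterations early in training and more later.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD: maximize the loss within an eps-ball around x."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        delta = (x + delta).clamp(0, 1) - x      # keep x + delta a valid image
        delta.requires_grad_(True)
    return (x + delta).detach()


def annealed_steps(epoch, total_epochs, min_steps=2, max_steps=10):
    """Linearly anneal the inner PGD step budget from min_steps to max_steps."""
    frac = epoch / max(total_epochs - 1, 1)
    return int(round(min_steps + frac * (max_steps - min_steps)))


def adversarial_train(model, loader, total_epochs=30, device="cpu"):
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    model.to(device).train()
    for epoch in range(total_epochs):
        k = annealed_steps(epoch, total_epochs)  # cheap attacks early, strong later
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y, steps=k)  # inner maximization
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()  # outer minimization
            opt.step()
```

Under this kind of schedule, early epochs cost roughly `min_steps` forward/backward passes per batch instead of `max_steps`, which is the intuition behind the reported 1/3 to 1/2 reduction in computational time; the paper's actual schedule and convergence guarantees are derived from optimal control theory rather than this fixed linear ramp.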
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2021.3103528