PatchCensor: Patch Robustness Certification for Transformers via Exhaustive Testing
Published in: ACM Transactions on Software Engineering and Methodology, 2023-09, Vol. 32(6), pp. 1-34, Article 154
Main Authors:
Format: Article
Language: English
Summary: In the past few years, the Transformer has been widely adopted across many domains and applications because of its impressive performance. Vision Transformer (ViT), a successful and well-known variant, has attracted considerable attention from both industry and academia thanks to its record-breaking performance in various vision tasks. However, like other classical neural networks, ViT is highly nonlinear and can be easily fooled by both natural and adversarial perturbations. This limitation could threaten the deployment of ViT in real industrial environments, especially in safety-critical scenarios, so improving the robustness of ViT is an urgent issue. Among all kinds of robustness, patch robustness is defined as giving a reliable output when an arbitrary patch of the input is perturbed. The perturbation could be natural corruption, such as part of the camera lens being blurred. It could also be a distribution shift, such as an object absent from the training data suddenly appearing in the camera's view. In the worst case, it could be a malicious adversarial patch attack that aims to fool the prediction of a machine learning model by arbitrarily modifying pixels within a restricted region of an input image. This kind of attack is also called a physical attack, as it is considered more realistic than a digital attack. Although there has been some work on improving the patch robustness of Convolutional Neural Networks, related studies on ViT are still at an early stage, as ViT is usually much more complex and has far more parameters; its robustness is therefore harder to assess and improve, let alone to guarantee provably. In this work, we propose PatchCensor, which aims to certify the patch robustness of ViT by applying exhaustive testing. We seek to provide a provable guarantee by considering the worst-case patch attack scenarios. Unlike empirical defenses against adversarial patches, which may be adaptively breached, certified robust approaches can provide a certified accuracy against arbitrary attacks under certain conditions. However, existing robustness certifications are mostly based on robust training, which often requires substantial training effort and sacrifices model performance on normal samples. To bridge the gap, PatchCensor seeks to improve the robustness of the whole system by detecting abnormal inputs instead of training a robust model and asking it to give reliable results for every input.
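The certification idea sketched in the summary, exhaustively testing masked variants of an input and accepting a prediction only when they unanimously agree, can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration of masking-based exhaustive testing under stated assumptions; the `model` interface, the zero-filled square masks, and the `mask_size`/`stride` parameters are illustrative choices, not PatchCensor's actual implementation.

```python
# Minimal sketch of masking-based exhaustive testing for patch
# robustness certification. Hypothetical illustration only: the
# masking scheme, parameters, and model interface are assumptions,
# not PatchCensor's actual implementation.
import numpy as np

def certify_patch_robustness(model, image, mask_size=48, stride=16):
    """Classify copies of `image` with each candidate region occluded.

    If every masked prediction agrees on one label, that label is
    returned as certified: provided `mask_size` and `stride` are
    chosen so that any admissible adversarial patch is fully covered
    by at least one mask, at least one vote is untouched by the patch,
    so a unanimous vote cannot have been forced by it. Disagreement
    flags the input as abnormal instead of returning an unreliable
    label.
    """
    h, w = image.shape[:2]
    votes = set()
    for y in range(0, h - mask_size + 1, stride):
        for x in range(0, w - mask_size + 1, stride):
            masked = image.copy()
            masked[y:y + mask_size, x:x + mask_size] = 0.0  # occlude region
            votes.add(model(masked))
    if len(votes) == 1:
        return votes.pop(), True    # certified prediction
    return None, False              # abnormal input detected

# Toy usage with a stand-in classifier that labels by mean brightness.
if __name__ == "__main__":
    dummy_model = lambda img: int(img.mean() > 0.5)
    image = np.random.rand(224, 224, 3)
    label, certified = certify_patch_robustness(dummy_model, image)
    print(f"label={label}, certified={certified}")
```

The pixel-space masking above is just one way to realize the idea; the summary indicates PatchCensor applies this style of exhaustive testing to ViT, rejecting inputs whose masked predictions disagree rather than retraining the model itself.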
ISSN: 1049-331X, 1557-7392
DOI: 10.1145/3591870