Promoting Robustness of Randomized Smoothing: Two Cost-Effective Approaches

Bibliographic Details
Main Authors: Liu, Linbo; Hoang, Trong Nghia; Nguyen, Lam M.; Weng, Tsui-Wei
Format: Conference Proceeding
Language: English
Description
Summary: Randomized smoothing has recently attracted attention in the field of adversarial robustness as a way to provide provable robustness guarantees for smoothed neural network classifiers. However, existing works show that vanilla randomized smoothing usually does not provide good robustness performance and often requires (re)training techniques on the base classifier in order to boost the robustness of the resulting smoothed classifier. In this work, we propose two cost-effective approaches to boost the robustness of randomized smoothing while preserving its clean performance. The first approach introduces a new robust training method, AdvMacer, which combines adversarial training with robustness certification maximization for randomized smoothing. We show that AdvMacer improves the robustness performance of randomized smoothing classifiers compared to SOTA baselines, while being 3x faster to train than the MACER baseline. The second approach introduces a post-processing method, EsbRS, which greatly improves the robustness certificate by building model ensembles. Extensive experiments verify the superior performance of our methods on various datasets. Our code is available at https://github.com/Trustworthy-ML-Lab/AdvMacer_and_EsbRS.
ISSN: 2374-8486
DOI: 10.1109/ICDM58522.2023.00139
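
For readers unfamiliar with the certification procedure the abstract refers to, below is a minimal, illustrative sketch of vanilla randomized smoothing certification in the style of Cohen et al. (2019), the baseline that AdvMacer and EsbRS improve upon. This is a sketch under assumptions, not the paper's implementation (see the linked repository for that): base_classifier, sigma, n, and alpha are hypothetical placeholders.

    # Illustrative sketch of vanilla randomized smoothing certification
    # (Cohen et al., 2019). Not the AdvMacer/EsbRS code; `base_classifier`,
    # `sigma`, `n`, and `alpha` are hypothetical placeholders.
    import numpy as np
    from scipy.stats import binomtest, norm

    def certify(base_classifier, x, sigma=0.25, n=1000, alpha=0.001):
        """Monte-Carlo certificate for the smoothed classifier g.

        Returns (predicted_class, certified_l2_radius), or (None, 0.0)
        when the prediction cannot be certified at confidence 1 - alpha.
        Note: Cohen et al. draw a separate, smaller sample to select the
        top class before estimating its probability; that step is omitted
        here for brevity.
        """
        # Count base-classifier predictions under isotropic Gaussian noise.
        counts = {}
        for _ in range(n):
            noisy = x + sigma * np.random.randn(*x.shape)
            c = base_classifier(noisy)  # hypothetical: returns a class label
            counts[c] = counts.get(c, 0) + 1

        top_class = max(counts, key=counts.get)
        # One-sided (1 - alpha) lower confidence bound on
        # p_A = P(f(x + noise) = top_class), via Clopper-Pearson.
        p_lower = binomtest(counts[top_class], n).proportion_ci(
            confidence_level=1 - 2 * alpha, method="exact").low
        if p_lower <= 0.5:
            return None, 0.0  # abstain: smoothed prediction not certifiable
        # Certified radius R = sigma * Phi^{-1}(p_A_lower) (Cohen et al., Thm 1).
        return top_class, sigma * norm.ppf(p_lower)

The returned radius R = sigma * Phi^{-1}(p_A_lower) guarantees the smoothed prediction is constant within an l2 ball of that radius around x; this certificate is the quantity that, per the abstract, AdvMacer maximizes during training and EsbRS improves post hoc via model ensembles.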