SAR: Sharpness-Aware Minimization for enhancing DNNs’ Robustness against bit-flip errors

Bibliographic Details
Published in: Journal of Systems Architecture, 2024-11, Vol. 156, p. 103284, Article 103284
Main Authors: Zhou, Changbao; Du, Jiawei; Yan, Ming; Yue, Hengshan; Wei, Xiaohui; Zhou, Joey Tianyi
Format: Article
Language: English
Description
Summary: As Deep Neural Networks (DNNs) are increasingly deployed in safety-critical scenarios, there is a growing need to address bit-flip errors occurring in hardware such as memory. These errors can change DNN weights, potentially degrading the performance of deployed models and causing catastrophic consequences. Existing methods improve DNNs’ fault tolerance or robustness by modifying the network’s size, structure, or inference and training processes. Unfortunately, these methods often enhance robustness at the expense of clean accuracy and introduce additional overhead during inference. To address these issues, we propose Sharpness-Aware Minimization for enhancing DNNs’ Robustness against bit-flip errors (SAR), which leverages the intrinsic robustness of DNNs. We begin with a comprehensive investigation of DNNs under bit-flip errors, yielding insightful observations about the intensity and occurrence of such errors. Based on these insights, we identify that Sharpness-Aware Minimization (SAM) has the potential to enhance DNN robustness. We analyze this potential through the relationship between the SAM formulation and our observations, and build a robustness-enhancing framework on SAM. Experimental validation across various models and datasets demonstrates that SAR effectively improves DNN robustness against bit-flip errors without sacrificing clean accuracy or introducing additional inference costs, making it a “double-win” method compared to existing approaches.

Highlights:
• The first work to uncover DNNs’ intrinsic robustness against bit-flip errors.
• The first work to adopt Sharpness-Aware Minimization to resist bit-flip errors.
• A valuable and lightweight framework for security-critical scenarios.
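
To make the failure mode concrete: a single flipped bit in a stored weight can change its value by many orders of magnitude, which is why the abstract distinguishes the intensity of such errors. Below is a minimal, self-contained Python illustration, not taken from the paper (flip_bit_f32 is a hypothetical helper), of flipping one bit in the IEEE-754 float32 encoding of a weight.

```python
# Illustration only: flip one bit of a weight's float32 representation.
# Flipping a high exponent bit turns a small weight into a huge one,
# while flipping a low mantissa bit barely changes it.
import struct

def flip_bit_f32(w: float, bit: int) -> float:
    """Return w with the given bit (0 = mantissa LSB, 31 = sign bit)
    of its float32 encoding flipped."""
    (u,) = struct.unpack("<I", struct.pack("<f", w))
    (out,) = struct.unpack("<f", struct.pack("<I", u ^ (1 << bit)))
    return out

print(flip_bit_f32(0.5, 30))  # exponent MSB: 0.5 -> 2**127, approx 1.7e38
print(flip_bit_f32(0.5, 0))   # mantissa LSB: 0.5 -> 0.50000006
```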
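The SAM objective the abstract refers to seeks flat minima by minimizing the worst-case loss within an L2 ball of radius rho around the weights: min_w max_{||eps|| <= rho} L(w + eps). The following is a minimal sketch of one generic two-step SAM update in PyTorch, assuming a standard supervised setup; it illustrates the published SAM procedure, not the authors’ SAR implementation, and names such as sam_step and rho are illustrative.

```python
# Sketch of one Sharpness-Aware Minimization (SAM) step, assuming a
# PyTorch model, a loss function, and a base optimizer (e.g., SGD).
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    # 1) Ascent step: perturb weights toward the locally worst case.
    loss = loss_fn(model(x), y)
    loss.backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)          # w <- w + eps (climb to the sharp point)
            eps.append((p, e))
    model.zero_grad()

    # 2) Descent step: take the gradient at the perturbed point, restore
    #    the original weights, then update with that gradient.
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)          # w <- w - eps (back to original weights)
    base_opt.step()
    base_opt.zero_grad()
    return loss.item()
```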
ISSN: 1383-7621
DOI: 10.1016/j.sysarc.2024.103284