Open DNN Box by Power Side-Channel Attack
| Published in: | IEEE Transactions on Circuits and Systems II: Express Briefs, 2020-11, Vol. 67 (11), pp. 2717-2721 |
|---|---|
| Main Authors: | |
| Format: | Article |
| Language: | English |
| Summary: | Deep neural networks are becoming popular and important assets of many AI companies. However, recent studies indicate that they are also vulnerable to adversarial attacks. Adversarial attacks can be either white-box or black-box. White-box attacks assume full knowledge of the model, while black-box attacks assume none. In general, revealing more internal information enables more powerful and efficient attacks. However, in most real-world applications, the internal information of embedded AI devices is unavailable. Therefore, in this brief, we propose a technique based on side-channel information to reveal the internal information of black-box models. Specifically, we make the following contributions: (1) unlike previous works, we use side-channel information to reveal the internal network architecture of embedded devices; (2) we construct models for internal parameter estimation, which no previous research has achieved; and (3) we validate our methods on real-world devices and applications. The experimental results show that our method achieves 96.50% accuracy on average. These results suggest that close attention should be paid to the security of AI devices, and that corresponding defensive strategies should be developed in the future. |
| ISSN: | 1549-7747; 1558-3791 |
| DOI: | 10.1109/TCSII.2020.2973007 |
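
The summary above describes inferring a black-box model's internal architecture from power side-channel traces. The following is a minimal, hypothetical sketch of that idea, not the authors' actual pipeline: the trace data here is synthetic, and the hand-rolled features and random-forest classifier are stand-ins for whatever estimation models the brief uses.

```python
# Illustrative sketch: identifying a DNN architecture from power traces
# with a simple supervised classifier. All data and model choices here
# are assumptions for illustration, not the method from the brief.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 300 synthetic traces of 2000 samples each. In a real
# attack these would be oscilloscope or on-board power measurements
# captured while the device runs inference.
traces = rng.normal(size=(300, 2000))
labels = rng.choice(["lenet", "alexnet", "vgg"], size=300)

def features(trace):
    """Crude per-trace features: global statistics plus coarse segment
    means, which loosely reflect per-layer power/timing structure."""
    segments = trace.reshape(20, -1).mean(axis=1)  # 20 coarse segments
    return np.concatenate([[trace.mean(), trace.std(), trace.max()], segments])

X = np.array([features(t) for t in traces])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("architecture-identification accuracy:",
      accuracy_score(y_test, clf.predict(X_test)))
```

On synthetic noise the accuracy will hover near chance; the premise of the brief is that real power traces carry enough layer-dependent structure for a classifier like this to reach high accuracy.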