Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems
Published in: IEEE Communications Letters, 2019-05, Vol. 23 (5), pp. 847-850
Main Authors:
Format: Article
Language: English
Summary: We show that end-to-end learning of communication systems through deep neural network autoencoders can be extremely vulnerable to physical adversarial attacks. Specifically, we describe how an attacker can craft effective physical black-box adversarial attacks. Because of the openness (broadcast nature) of the wireless channel, an adversarial transmitter can increase the block error rate of a communication system by orders of magnitude by transmitting a well-designed perturbation signal over the channel. We reveal that these adversarial attacks are more destructive than jamming attacks. We also show that classical coding schemes are more robust than the autoencoders against both adversarial and jamming attacks.
ISSN: 1089-7798, 1558-2558
DOI: 10.1109/LCOMM.2019.2901469
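
To make the setting in the summary concrete, below is a minimal, self-contained sketch (not the authors' code) of an autoencoder communication system: an encoder network, an AWGN channel, and a decoder network, attacked by a power-constrained additive perturbation that an adversary could broadcast over the air. The message set size, block length, SNR, training schedule, and the simple gradient-based universal perturbation are illustrative assumptions; the paper crafts black-box, input-agnostic perturbations, whereas this sketch uses white-box gradients only to illustrate the block-error-rate gap between a crafted perturbation and Gaussian jamming of equal power.

```python
# Illustrative sketch only: autoencoder link over AWGN, attacked by a
# power-constrained additive perturbation (white-box stand-in for the
# paper's black-box attack). All hyperparameters are assumptions.
import torch
import torch.nn as nn

M, n = 16, 7                      # 16 messages (k = 4 bits) over 7 complex channel uses
enc = nn.Sequential(nn.Linear(M, 32), nn.ReLU(), nn.Linear(32, 2 * n))
dec = nn.Sequential(nn.Linear(2 * n, 32), nn.ReLU(), nn.Linear(32, M))

def normalize(x):
    # Scale each codeword to unit average power per real dimension.
    return x * (x.shape[1] ** 0.5) / x.norm(dim=1, keepdim=True)

def channel(x, snr_db=7.0):
    # AWGN channel at the given per-dimension SNR.
    sigma = (10 ** (-snr_db / 10)) ** 0.5
    return x + sigma * torch.randn_like(x)

def bler(perturb=None, trials=2000):
    # Block error rate, optionally with an additive perturbation at the receiver.
    with torch.no_grad():
        msgs = torch.randint(0, M, (trials,))
        y = channel(normalize(enc(torch.eye(M)[msgs])))
        if perturb is not None:
            y = y + perturb
        return (dec(y).argmax(dim=1) != msgs).float().mean().item()

# Train encoder and decoder end to end on random messages.
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(3000):
    msgs = torch.randint(0, M, (256,))
    logits = dec(channel(normalize(enc(torch.eye(M)[msgs]))))
    opt.zero_grad()
    loss_fn(logits, msgs).backward()
    opt.step()

# Craft one input-agnostic perturbation from the averaged loss gradient,
# then compare it with Gaussian jamming of the same power budget.
eps = 0.5 * (2 * n) ** 0.5        # assumed perturbation power budget
msgs = torch.randint(0, M, (256,))
y = channel(normalize(enc(torch.eye(M)[msgs]))).detach().requires_grad_(True)
loss_fn(dec(y), msgs).backward()
direction = y.grad.mean(dim=0)    # average gradient over a batch of messages
adv = eps * direction / direction.norm()
jam = torch.randn(2 * n)
jam = eps * jam / jam.norm()

print("BLER clean      :", bler())
print("BLER jamming    :", bler(jam))
print("BLER adversarial:", bler(adv))
```

Under these assumptions, the crafted perturbation should degrade the block error rate noticeably more than Gaussian jamming of the same power, which is the qualitative effect the summary describes.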