Text Adversarial Attacks and Defenses: Issues, Taxonomy, and Perspectives
Published in: Security and Communication Networks, 2022-04, Vol. 2022, pp. 1-25
Main Authors:
Format: Article
Language: English
Summary: Deep neural networks (DNNs) are widely used across many fields thanks to their powerful representation learning capabilities, but they face serious and growing security threats. Adversarial examples were first discovered in the computer vision (CV) field, where models were fooled by perturbations of the original inputs, and they also exist in the natural language processing (NLP) community. Unlike images, however, text is discrete and semantic in nature, which makes generating adversarial attacks even more difficult. In this work, we provide a comprehensive overview of adversarial attacks and defenses in the textual domain. First, we introduce the NLP pipeline, including vector representations of text, DNN-based victim models, and a formal definition of adversarial attacks, which makes our review self-contained. Second, we propose a novel, fine-grained taxonomy of existing adversarial attacks and defenses that is closely aligned with practical applications. Finally, we summarize and discuss the major open issues and further research directions for text adversarial attacks and defenses.
ISSN: 1939-0114, 1939-0122
DOI: 10.1155/2022/6458488
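The summary above describes adversarial attacks in which small, meaning-preserving text perturbations flip a model's prediction. As a minimal illustrative sketch only (not taken from the paper; the classifier, synonym table, and sentence are all hypothetical), a greedy synonym-swap attack against a toy keyword-based sentiment classifier might look like this:

```python
# Minimal illustrative sketch of a textual adversarial attack: greedily
# replace words with synonyms until a toy classifier's prediction flips.
# Everything here (classifier, synonym table, sentence) is hypothetical.

POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "awful", "hate"}

def toy_classifier(text: str) -> str:
    """Classify sentiment by counting positive vs. negative keywords."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative"

# Hand-written substitutions that keep the sentence readable to a human.
SYNONYMS = {"great": "superb", "good": "fine", "bad": "poor", "terrible": "dreadful"}

def synonym_swap_attack(text: str, classifier):
    """Swap one word at a time; return the first variant that flips the label."""
    original = classifier(text)
    words = text.split()
    for i, w in enumerate(words):
        sub = SYNONYMS.get(w.lower())
        if sub is None:
            continue
        candidate = " ".join(words[:i] + [sub] + words[i + 1:])
        if classifier(candidate) != original:
            return candidate  # adversarial example: tiny edit, new label
    return None  # no flip found under this one-word budget

sentence = "the movie was great"
adversarial = synonym_swap_attack(sentence, toy_classifier)
print(toy_classifier(sentence), "->", adversarial, "->", toy_classifier(adversarial))
```

This toy setup captures the discreteness problem the abstract mentions: unlike pixel noise, each candidate perturbation is a whole-word substitution, and the attacker must search a combinatorial space of semantically valid edits.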