Adversarial attacks on a lexical sentiment analysis classifier
| Published in: | Computer communications, 2021-06, Vol. 174, p. 154–171 |
|---|---|
| Main Authors: | , , , |
| Format: | Article |
| Language: | English |
Summary: | Social media has become a relevant information source for many decision-making processes and for the definition of business strategies. As various sentiment analysis techniques are used to transform collected data into intelligence information, the sentiment classifiers used in these collection environments must be carefully studied and observed before being considered trustworthy and ready to be deployed in decision support systems. An important research area concerns the robustness of sentiment classifiers against new adversarial attacks, in which small perturbations may be crafted by malicious users to deceive the classifiers, generating a perception different from the one that should be observed in the environment. It is therefore important to identify and analyze the vulnerabilities of these classifiers under different adversarial attack strategies, so that countermeasures can be proposed to mitigate such attacks. In this context, this work presents adversarial attacks against a lexical natural language classifier. As the target of the attacks, this classifier is used to compute the sentiment of data posted by users in various social media applications. The results indicate that the vulnerabilities found, if exploited by malicious users in applications that use the same lexical classifier, could invert or cancel the classifier's perception, thus generating perceptions that do not correspond to reality and undermining decision making. This work also proposes countermeasures that might mitigate the implemented attacks. |
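The attack idea in the summary can be illustrated with a toy example. The sketch below is an assumption for illustration only, not the paper's classifier or attack implementation: a minimal lexicon-based scorer, a "cancel" perturbation that misspells polar words so they fall outside the lexicon, and an "invert" perturbation that pads the text with opposite-polarity words.

```python
# Illustrative sketch only: toy lexicon, scores, and attack strategies
# are assumptions, not the classifier or attacks studied in the paper.

LEXICON = {"great": 1.0, "love": 1.0, "good": 0.5,
           "bad": -0.5, "awful": -1.0, "hate": -1.0}

def sentiment(text):
    """Sum lexicon scores over tokens; >0 positive, <0 negative, 0 neutral."""
    return sum(LEXICON.get(tok, 0.0) for tok in text.lower().split())

def cancel_attack(text):
    """Misspell every polar word so it is out-of-vocabulary (score drops to 0)."""
    out = []
    for tok in text.split():
        # Insert an underscore before the last character of polar words.
        out.append(tok[:-1] + "_" + tok[-1] if tok.lower() in LEXICON else tok)
    return " ".join(out)

def invert_attack(text, padding=("awful", "awful", "awful")):
    """Append opposite-polarity words until the overall sign flips."""
    return text + " " + " ".join(padding)

original = "great phone love it"
print(sentiment(original))                 # 2.0  (positive)
print(sentiment(cancel_attack(original)))  # 0.0  (perception canceled)
print(sentiment(invert_attack(original)))  # -1.0 (perception inverted)
```

Because a purely lexical classifier has no notion of word similarity, even a one-character perturbation removes a word's contribution entirely, which is what makes such attacks cheap; countermeasures such as spelling normalization before lookup directly target this weakness.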
ISSN: | 0140-3664; 1873-703X |
DOI: | 10.1016/j.comcom.2021.04.026 |