Black-box attacks against log anomaly detection with adversarial examples
Published in: Information Sciences, 2023-01, Vol. 619, pp. 249–262
Main Authors:
Format: Article
Language: English
Summary: Deep neural networks (DNNs) have been widely employed for log anomaly detection, where they outperform a range of conventional methods. Their striking success stems from their ability to explore and extract semantic information from large volumes of log data, which helps them infer complex log anomaly patterns more accurately. Despite its strong generalization accuracy, this data-driven approach remains highly vulnerable to adversarial attacks, which severely limits its practical use. To address this issue, several studies have proposed anomaly detectors that equip neural networks with improved robustness. Because these detectors are built on effective adversarial attack methods, strong attack approaches are a prerequisite for developing better detectors and, in turn, more robust neural networks. In this study, we propose two strong and effective black-box attackers, one attention-based and one gradient-based, to defeat three target systems: MLP, AutoEncoder, and DeepLog.
Our approach generates more effective adversarial examples by first analyzing which logkeys are vulnerable. The attention-based attacker leverages attention weights, obtained from our previously developed attention-based convolutional neural network model, to identify vulnerable logkeys and derive adversarial examples. The gradient-based attacker computes gradients with respect to potentially vulnerable logkeys to search for an optimal adversarial sample. Experimental results show that both approaches significantly outperform the state-of-the-art attacker log anomaly mask (LAM). In particular, owing to its optimization, the gradient-based attacker substantially increases the misclassification rate on all three target models, achieving a 70% attack success rate on DeepLog and exceeding the baseline by 52%.
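To make the gradient-based idea concrete, the sketch below shows one way a gradient-guided logkey substitution could work, assuming white-box gradient access to a surrogate PyTorch detector with an embedding layer over the logkey vocabulary. All names (`model`, `embed`, `gradient_logkey_attack`) are hypothetical, and the greedy single-substitution step is a standard HotFlip-style approximation, not the authors' actual procedure.

```python
# Illustrative sketch only: a one-step, gradient-guided logkey substitution.
import torch
import torch.nn.functional as F

def gradient_logkey_attack(model, embed, seq, label):
    """Flip the single most vulnerable logkey in a log-sequence window.

    model : callable mapping embedded sequences (1, T, d) to class logits
    embed : torch.nn.Embedding over the logkey vocabulary, shared with model
    seq   : LongTensor of shape (1, T) holding logkey ids
    label : int ground-truth label (e.g. 1 = anomalous)
    """
    emb = embed(seq).detach().requires_grad_(True)       # (1, T, d)
    loss = F.cross_entropy(model(emb), torch.tensor([label]))
    loss.backward()
    grad = emb.grad[0]                                   # (T, d)

    # Treat the position with the largest gradient norm as the most
    # vulnerable logkey; an attention-based variant would instead rank
    # positions by the attention weights of the surrogate model.
    pos = int(grad.norm(dim=1).argmax())

    # First-order estimate of the loss increase for each candidate
    # replacement: (e_candidate - e_current) . grad[pos].
    scores = (embed.weight - embed.weight[seq[0, pos]]) @ grad[pos]
    adv = seq.clone()
    adv[0, pos] = int(scores.argmax())                   # best substitution
    return adv
```

In a black-box setting such as the one the paper targets, a sketch like this would run against a surrogate model and the resulting adversarial sequences would be transferred to the victim detector; an iterative version would repeat the substitution step until the prediction flips or a perturbation budget is exhausted.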
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2022.11.007