MANomaly: Mutual adversarial networks for semi-supervised anomaly detection
Published in: Information Sciences, 2022-09, Vol. 611, pp. 65-80
Main Authors:
Format: Article
Language: English
Summary: In network intrusion detection, the available attack traffic is far scarcer than normal traffic, so detecting attacks and intrusions in this imbalanced traffic can be treated as a semi-supervised learning problem, i.e., finding outliers (anomalies) in a data population that follows a certain distribution. In this paper, we design a novel network model named the mutual adversarial network (MAN), which comprises two identical reconstruction autoencoder (RecAE) subnetworks. In training, the two subnetworks use the proposed mutual adversarial training to learn the data distribution of normal traffic samples. In detection, we identify anomalies from the residual values obtained when samples are reconstructed by MAN. In addition, we devise a novel method for identifying anomalies from anomaly scores, named the high anomaly suppression (HAS) determination mechanism, which uses mean values to suppress the effect of noisy data in the test samples. We then construct a novel semi-supervised reconstruction anomaly detection framework named MANomaly by combining MAN with the HAS determination mechanism. Furthermore, we design three different mutual adversarial training approaches for MANomaly and evaluate them on two publicly available network traffic datasets: NSL-KDD and UNSW-NB15. Experimental results show that our method achieves excellent performance using only 5% of the normal training data.
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2022.08.033
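The summary above centers on reconstruction-based scoring: a model is fitted to normal traffic only, and test samples with large reconstruction residuals are flagged as anomalies. The sketch below illustrates that generic idea in PyTorch under stated assumptions; it is not the paper's MAN or HAS implementation (the two-subnetwork mutual adversarial training and the exact HAS rule are not given in this record), and the class and function names (RecAE, train_on_normal, anomaly_scores), layer sizes, and percentile-based threshold are hypothetical.

```python
# Minimal sketch (assumption): a single reconstruction autoencoder trained on
# normal traffic only, scoring samples by reconstruction residual.
# This is NOT the paper's MAN/HAS implementation.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


class RecAE(nn.Module):
    """Simple reconstruction autoencoder (hypothetical layer sizes)."""

    def __init__(self, n_features: int, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train_on_normal(model, normal_loader, epochs=10, lr=1e-3):
    """Fit the autoencoder to normal traffic by minimizing reconstruction MSE."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for (x,) in normal_loader:
            opt.zero_grad()
            loss_fn(model(x), x).backward()
            opt.step()


@torch.no_grad()
def anomaly_scores(model, x):
    """Per-sample score: mean squared reconstruction residual over features.
    Averaging here is only loosely analogous to HAS's use of mean values."""
    model.eval()
    return ((model(x) - x) ** 2).mean(dim=1)


if __name__ == "__main__":
    # Synthetic usage: train on "normal" data, then threshold at a high
    # percentile of normal scores (a hypothetical rule, not the paper's HAS).
    normal = torch.randn(1000, 20)
    model = RecAE(n_features=20)
    loader = DataLoader(TensorDataset(normal), batch_size=64, shuffle=True)
    train_on_normal(model, loader)
    threshold = anomaly_scores(model, normal).quantile(0.95)
    test_batch = torch.cat([torch.randn(16, 20), 3.0 * torch.randn(16, 20) + 5.0])
    print(anomaly_scores(model, test_batch) > threshold)
```

In this sketch the threshold is chosen from the scores of normal data alone, which matches the semi-supervised setting the summary describes (only normal traffic is used for training); the shifted half of the synthetic test batch stands in for attack traffic.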