Pre-trained language model-enhanced conditional generative adversarial networks for intrusion detection
Published in: Peer-to-Peer Networking and Applications, 2024-01, Vol. 17(1), pp. 227-245
Main Authors:
Format: Article
Language: English
Summary: As cyber threats continue to evolve, ensuring network security has become increasingly critical. Deep learning-based intrusion detection systems (IDS) are crucial for addressing this issue. However, imbalanced training data and limited feature extraction weaken classification performance for intrusion detection. This paper presents a conditional generative adversarial network (CGAN) enhanced by Bidirectional Encoder Representations from Transformers (BERT), a pre-trained language model, for multi-class intrusion detection. The approach augments minority attack data through the CGAN to mitigate class imbalance, and BERT, with its robust feature extraction, is embedded into the CGAN discriminator to strengthen input-output dependency and improve detection through adversarial training. Experiments show the proposed model outperforms baselines on the CSE-CIC-IDS2018, NF-ToN-IoT-V2, and NF-UNSW-NB15-v2 datasets, achieving F1-scores of 98.230%, 98.799%, and 89.007%, respectively, and improving F1-scores over baselines by 1.218%-13.844%, 0.215%-13.779%, and 2.056%-22.587%.
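To illustrate the conditioning idea the summary describes, the sketch below wires up a CGAN forward pass in which both the generator and the discriminator receive a class label, so minority attack classes can be targeted for augmentation. All dimensions, the toy dense layers, and the single-layer feature extractor standing in for BERT are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

NOISE_DIM, N_CLASSES, FEAT_DIM, HIDDEN = 16, 5, 32, 64

def one_hot(labels, n_classes=N_CLASSES):
    """Encode integer class labels as one-hot condition vectors."""
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out

# Generator: maps (noise, class label) -> synthetic flow-feature vector.
W_g1 = rng.normal(scale=0.1, size=(NOISE_DIM + N_CLASSES, HIDDEN))
W_g2 = rng.normal(scale=0.1, size=(HIDDEN, FEAT_DIM))

def generator(z, y):
    h = np.tanh(np.concatenate([z, y], axis=1) @ W_g1)
    return np.tanh(h @ W_g2)  # synthetic sample for the requested class

# Discriminator: also conditions on the label and scores real vs. fake.
# In the paper a pre-trained BERT encoder sits here; a single dense
# layer is used below purely as a placeholder feature extractor.
W_d1 = rng.normal(scale=0.1, size=(FEAT_DIM + N_CLASSES, HIDDEN))
W_d2 = rng.normal(scale=0.1, size=(HIDDEN, 1))

def discriminator(x, y):
    h = np.tanh(np.concatenate([x, y], axis=1) @ W_d1)
    return 1.0 / (1.0 + np.exp(-(h @ W_d2)))  # probability "real"

# Augment one (hypothetical) minority class, here label 3, with 8 samples.
labels = one_hot(np.full(8, 3))
z = rng.normal(size=(8, NOISE_DIM))
fake = generator(z, labels)
scores = discriminator(fake, labels)
print(fake.shape, scores.shape)  # (8, 32) (8, 1)
```

In actual adversarial training, the discriminator's scores on real versus generated label-conditioned samples would drive gradient updates to both networks; only the generated minority-class samples are then added to the IDS training set.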
ISSN: 1936-6442, 1936-6450
DOI: 10.1007/s12083-023-01595-6