An explainable deep learning-enabled intrusion detection framework in IoT networks

Bibliographic Details
Published in:Information sciences 2023-08, Vol.639, p.119000, Article 119000
Main Authors: Keshk, Marwa, Koroniotis, Nickolaos, Pham, Nam, Moustafa, Nour, Turnbull, Benjamin, Zomaya, Albert Y.
Format: Article
Language:English
Summary:Although the field of eXplainable Artificial Intelligence (XAI) has attracted significant interest in recent years, its implementation within cyber security applications still needs further investigation to understand its effectiveness in discovering attack surfaces and vectors. In cyber defence, especially anomaly-based Intrusion Detection Systems (IDS), the emerging applications of machine/deep learning models require interpretation of the models' architecture and explanation of the models' predictions to examine how cyberattacks would occur. This paper proposes a novel explainable intrusion detection framework for Internet of Things (IoT) networks. We have developed an IDS using a Long Short-Term Memory (LSTM) model to identify cyberattacks and explain the model's decisions. The LSTM model is trained and evaluated on a novel set of input features extracted by the proposed SPIP framework (S: Shapley Additive exPlanations; P: Permutation Feature Importance; I: Individual Conditional Expectation; P: Partial Dependence Plot). The framework was validated using the NSL-KDD, UNSW-NB15 and TON_IoT datasets. The SPIP framework achieved high detection accuracy, low processing time, and high interpretability of data features and model outputs compared with peer techniques. The proposed framework has the potential to assist administrators and decision-makers in understanding complex attack behaviour.
•We propose a novel explainable deep learning-based intrusion detection method that provides global and local explanations.
•We use input features extracted by the proposed framework to train and evaluate the proposed intrusion detection method.
•We demonstrate the proposed framework's ability to effectively enhance the interpretability of cyber defence systems in IoT networks.
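To make the feature-selection idea concrete, the following is a minimal sketch of Permutation Feature Importance (PFI), one of the four XAI techniques the SPIP framework combines. The toy data, the stand-in scoring rule, and all variable names here are illustrative assumptions, not the paper's actual model or datasets; the principle shown (shuffle one feature, measure the drop in accuracy) is the standard PFI procedure.

```python
# Sketch of Permutation Feature Importance (PFI): a feature matters if
# randomly shuffling its column degrades the model's accuracy.
import numpy as np

rng = np.random.default_rng(0)

# Toy "traffic" data: 3 features; only the first two drive the label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)

def model_accuracy(X, y):
    """Stand-in for a trained classifier: a fixed linear decision rule."""
    pred = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)
    return (pred == y).mean()

baseline = model_accuracy(X, y)

# PFI: permute one feature at a time and record the accuracy drop.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(baseline - model_accuracy(Xp, y))

# Feature 2 is never used by the rule, so its importance is exactly zero;
# features 0 and 1 show positive importance.
```

In a full SPIP-style pipeline, scores like these (together with SHAP, ICE and PDP views) would be used to rank and retain input features before training the LSTM-based detector.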
ISSN:0020-0255
1872-6291
DOI:10.1016/j.ins.2023.119000