A Robust Deep-Learning-Enabled Trust-Boundary Protection for Adversarial Industrial IoT Environment
Published in: IEEE Internet of Things Journal, 2021-06, Vol. 8, No. 12, pp. 9611-9621
Main Authors: , , ,
Format: Article
Language: English
Summary: In recent years, trust-boundary protection has become a challenging problem in Industrial Internet of Things (IIoT) environments. Trust boundaries separate IIoT processes and data stores into different groups based on user access privilege. Points where dataflow intersects with the trust boundary are becoming entry points for attackers. Attackers use various model-skewing and intelligent techniques to generate adversarial/noisy examples that are indistinguishable from natural data. Many of the existing machine-learning (ML)-based approaches attempt to circumvent this problem. However, owing to the extremely large attack surface in the IIoT network, capturing a true distribution during training is difficult. The standard generative adversarial network (GAN) commonly generates adversarial examples for training using randomly sampled noise. However, the distribution of noisy inputs of the GAN differs largely from the actual distribution of data in IIoT networks and shows less robustness against adversarial attacks. Therefore, in this article, we propose a downsampler-encoder-based cooperative data generator that is trained using an algorithm to ensure better capture of the actual distribution of attack models for the large IIoT attack surface. The proposed downsampler-based data generator is alternately updated and verified during training using a deep-neural-network discriminator to ensure robustness. This guarantees the performance of the generator against input sets with a high noise level at the time of training and testing. Various experiments are conducted on a real IIoT testbed data set. Experimental results show that the proposed approach outperforms conventional deep learning and other ML techniques in terms of robustness against adversarial/noisy examples in the IIoT environment.
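The core mechanism the abstract describes — a data generator that is alternately updated and then verified against a discriminator — follows the standard adversarial training loop. Below is a minimal 1-D sketch of that alternating update schedule. It is an illustration only: the paper's downsampler-encoder generator and deep-NN discriminator are replaced by a toy linear generator G(z) = w*z + b and a logistic discriminator D(x) = sigmoid(a*x + c), and all hyperparameters (learning rate, batch size, target distribution) are assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

TARGET_MU = 4.0          # stand-in for the "actual distribution" of IIoT data
a, c = 0.1, 0.0          # discriminator parameters: D(x) = sigmoid(a*x + c)
w, b = 1.0, 0.0          # generator parameters:     G(z) = w*z + b
lr, batch = 0.01, 64

for _ in range(3000):
    # --- discriminator step: ascend log D(real) + log(1 - D(fake)) ---
    x_real = rng.normal(TARGET_MU, 1.0, batch)
    x_fake = w * rng.normal(0.0, 1.0, batch) + b
    s_r, s_f = sigmoid(a * x_real + c), sigmoid(a * x_fake + c)
    a += lr * (np.mean((1 - s_r) * x_real) - np.mean(s_f * x_fake))
    c += lr * (np.mean(1 - s_r) - np.mean(s_f))

    # --- generator step: ascend log D(fake) (non-saturating GAN loss) ---
    z = rng.normal(0.0, 1.0, batch)
    s_f = sigmoid(a * (w * z + b) + c)
    w += lr * np.mean((1 - s_f) * a * z)
    b += lr * np.mean((1 - s_f) * a)

# After the alternating updates, the generator's mean (b) should have
# drifted from 0 toward the real mean TARGET_MU.
print(round(float(b), 2))
```

The two phases mirror the "alternately updated and verified" schedule: the discriminator is first fit to tell real samples from generated ones, then the generator is nudged in the direction that the current discriminator scores as more realistic. The paper's contribution replaces the random-noise input of a standard GAN with a downsampler-encoder front end so the generator's inputs track the real IIoT traffic distribution; that component is not reproduced in this sketch.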
ISSN: 2327-4662
DOI: 10.1109/JIOT.2020.3019225