Research Area:  Machine Learning
With the advent of the Internet of Things (IoT), network attacks have become more diverse and intelligent. To ensure network security, the Intrusion Detection System (IDS) has become essential. However, when confronted with adversarial examples, the IDS itself is no longer secure, and attackers can increase the success rate of their attacks by misleading it. It is therefore necessary to improve the robustness of the IDS. In this paper, we employ the Fast Gradient Sign Method (FGSM) to generate adversarial examples and test the robustness of three intrusion detection models based on the convolutional neural network (CNN), long short-term memory (LSTM), and gated recurrent unit (GRU). We employ three training methods: the first trains the models with normal examples; the second trains the models directly with adversarial examples; and the third pretrains the models with normal examples and then trains them with adversarial examples. We evaluate the performance of the three models under these training methods and find that, under normal training, CNN is the model most robust to adversarial examples. After adversarial training, the robustness of GRU and LSTM to adversarial examples is greatly improved.
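The FGSM attack mentioned in the abstract perturbs an input in the direction of the sign of the loss gradient: x_adv = x + ε·sign(∇ₓL(x, y)). A minimal NumPy sketch of this idea, using a toy logistic-regression "detector" (the weights, features, and ε here are purely illustrative assumptions, not the paper's models or data):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """FGSM perturbation for a logistic-regression classifier.

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (sigmoid(w.x + b) - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical 4-feature flow record classified as benign (label 0).
w = np.array([1.5, -2.0, 0.5, 1.0])   # assumed trained weights
b = -0.2
x = np.array([0.8, 0.1, 0.4, 0.6])
x_adv = fgsm(x, y=0, w=w, b=b, eps=0.1)
```

Because FGSM moves every feature by exactly ±ε, the perturbation is bounded in the L∞ norm, which is why it serves both as a cheap attack for robustness testing and as the source of examples for the adversarial-training methods compared in the paper.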
Author(s) Name:  Xingbing Fu, Nan Zhou, Libin Jiao, Haifeng Li & Jianwu Zhang
Journal name:  Annals of Telecommunications
Publisher name:  Springer
Volume Information:  volume 76, pages 273–285 (2021)
Paper Link:   https://link.springer.com/article/10.1007/s12243-021-00854-y