A Deep Reinforcement Learning Approach to Modelling an Intrusion Detection System Using Asynchronous Advantage Actor-Critic (A3C) Algorithm
Junior Kiplimo Yego, Dr. Nicholas Kiget & Mr. Daniel Samoei
Department of Information Technology
Moi University, Kenya
Corresponding Author: Junkiy62@gmail.com
Abstract: The growth of the internet and its use has been accompanied by evolving attacks, with increasingly novel attacks of devastating effect being witnessed. Intrusion Detection Systems (IDS) have yet to achieve maximum success owing to false positives and low detection rates. The purpose of this study was to model an intrusion detection system using the Asynchronous Advantage Actor-Critic (A3C) algorithm. In this paper we address the following objectives: (i) to evaluate the machine learning techniques currently used in IDS, (ii) to determine the effectiveness of the Asynchronous Advantage Actor-Critic algorithm in anomaly detection, and (iii) to select an appropriate training dataset and prepare it for use with A3C. A conceptual study was conducted to examine these objectives. The UNSW_TRAIN and UNSW_TEST samples were selected by purposive sampling from the full UNSW-NB15 dataset, and analysis of the dataset was performed using Python. The key findings were that anomaly detection is the preferred approach because of its ability to detect novel attacks, and that continued research on intrusion detection is needed to improve solutions to the problem of false positives and to fully optimize accuracy. Because the UNSW-NB15 dataset is comprehensive, all attack types should be used so that intrusions are accurately depicted; where only selected attack types are used, feature selection should be performed carefully so that modern attack types are reflected.
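As a brief illustration of the dataset-preparation step summarised above, the sketch below shows one possible way to load the UNSW-NB15 training and testing partitions and encode them into numeric feature vectors that a reinforcement-learning agent such as A3C could consume as states. The file names, column names (e.g. id, attack_cat, label) and preprocessing choices are assumptions based on the publicly distributed UNSW-NB15 partition CSVs, not the exact pipeline used in the study.

```python
# Minimal sketch (assumption: file and column names follow the public
# UNSW-NB15 partition CSVs; the study's exact pipeline may differ).
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Assumed file names for the purposively sampled train/test partitions.
train_df = pd.read_csv("UNSW_NB15_training-set.csv")
test_df = pd.read_csv("UNSW_NB15_testing-set.csv")

# Drop identifier/category columns and keep the binary label
# (0 = normal, 1 = attack) as the detection target.
drop_cols = ["id", "attack_cat"]
X_train = train_df.drop(columns=drop_cols + ["label"], errors="ignore")
y_train = train_df["label"]
X_test = test_df.drop(columns=drop_cols + ["label"], errors="ignore")
y_test = test_df["label"]

# One-hot encode categorical features (e.g. proto, service, state)
# and align the train/test column sets.
X_train = pd.get_dummies(X_train)
X_test = pd.get_dummies(X_test)
X_train, X_test = X_train.align(X_test, join="left", axis=1, fill_value=0)

# Scale features to [0, 1] so each record can serve as a state vector
# for a reinforcement-learning agent such as A3C.
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

print(X_train.shape, X_test.shape)
```

In one common reinforcement-learning formulation of intrusion detection, each preprocessed record would be presented to the A3C worker agents as a state, the action would be the classification decision (normal or attack), and the reward would reflect agreement with the ground-truth label; this is offered only as a plausible framing, not as the study's definitive design.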