Evasion-aware botnet detection using artificial intelligence

Randhawa, Rizwan Hamid (2023) Evasion-aware botnet detection using artificial intelligence. Doctoral thesis, Northumbria University.



Adversarial evasions are a modern threat to Machine Learning (ML) based applications. Because classic ML inference systems are vulnerable, botnet detectors are equally likely to be attacked with adversarial examples, which can be generated by sophisticated attack strategies built on complex AI models; generative AI models are one candidate for spawning such evasion attacks. The other significant concern is data scarcity, which biases ML classifiers toward the majority-class samples during training. This research proposes novel techniques to improve the detection accuracy of botnet classifiers and to mitigate the effects of adversarial evasion and data scarcity. The ultimate goal is to design a sophisticated botnet detector that is adversarially aware in low-data regimes. First, the technical background of the research is presented to help understand the problem and the potential solutions. Second, a Generative Adversarial Network (GAN) based model called Botshot is proposed to address dataset-imbalance issues through Adversarial Training (AT). Botshot gives promising results in contrast to the most popular ML classifiers, which adversarial samples can fool at a 100% rate; after AT, Botshot improves detection accuracy to 99.74% in the best case on one of the botnet datasets used. Third, an evasion-aware model called Evasion Generative Adversarial Network (EVAGAN) for botnet detection in low-data regimes is presented. EVAGAN outperforms the state-of-the-art Auxiliary Classifier Generative Adversarial Network (ACGAN) in detection performance, stability, and time complexity. Last, an improved version of EVAGAN called the deep Reinforcement Learning based Generative Adversarial Network (RELEVAGAN) is proposed to further harden EVAGAN against evasion attacks while preserving the attack samples' semantics.
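The rebalancing idea behind Botshot — synthesizing minority-class (botnet) samples with a trained GAN generator and retraining the classifier on the balanced union — can be sketched as follows. This is a minimal illustration, not the thesis implementation: the "generator" is a Gaussian stand-in for a trained GAN, and all feature dimensions and class sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced flow-feature dataset: 1000 benign vs 50 botnet samples
# (hypothetical numbers; real botnet datasets are far more skewed and richer).
benign = rng.normal(0.0, 1.0, size=(1000, 4))
botnet = rng.normal(3.0, 1.0, size=(50, 4))

def generator(n, rng):
    """Stand-in for a trained GAN generator: maps noise vectors to
    synthetic botnet-like feature vectors (illustrative simplification)."""
    z = rng.normal(0.0, 1.0, size=(n, 4))
    return 3.0 + z  # crudely mimic the botnet feature distribution

# Augmentation step: synthesize minority-class samples until both classes
# are balanced, then a classifier would be retrained on the union.
synthetic = generator(len(benign) - len(botnet), rng)
X = np.vstack([benign, botnet, synthetic])
y = np.concatenate([np.zeros(len(benign)),
                    np.ones(len(botnet) + len(synthetic))])

print(X.shape, int(y.sum()))  # 2000 samples, half labelled botnet
```

Any downstream classifier trained on `X, y` now sees both classes equally often, which is the bias the abstract attributes to data scarcity.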
The rationale is to proactively attack the detection model with a Deep Reinforcement Learning (DRL) agent in order to discover possible unseen functionality-preserving adversarial evasions. The attacks are conducted during EVAGAN training so that the network weights adjust to the perturbations crafted by the agent. Like its parent model, RELEVAGAN does not require AT for the ML classifiers, since it acts as an adversarially aware botnet detection model in its own right. The experimental results show RELEVAGAN's superiority over EVAGAN in terms of early convergence. RELEVAGAN is one step further toward an evasion-aware and functionality-preserving botnet detection model that, with the help of DRL, can remain sustainable as botnets evolve.
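The agent-in-the-loop idea — an agent searching for functionality-preserving perturbations that evade the current detector, whose evasions are then fed back into training — can be sketched as below. Everything here is an assumed simplification: the detector is a fixed byte-rate threshold standing in for a trained discriminator, and the "policy" is a deterministic grid search rather than a learned DRL agent; the feedback/retraining step is omitted for brevity.

```python
import numpy as np

# Hypothetical detector (stand-in for a trained discriminator): flags a
# flow as botnet when its byte rate exceeds a threshold.
THRESHOLD = 500.0  # bytes/sec

def detect(flow):
    total_bytes, duration = flow
    return total_bytes / duration > THRESHOLD

# Functionality-preserving action space (assumed constraint): the agent may
# only pad the flow with idle time, so the payload bytes -- and hence the
# attack's semantics -- are untouched.
def agent_perturb(flow, steps=20):
    """Grid-search stand-in for a learned DRL policy: try small duration
    paddings and keep the first one that evades the detector."""
    total_bytes, duration = flow
    for pad in np.linspace(0.0, 5.0, steps):
        if not detect((total_bytes, duration + pad)):
            return (total_bytes, duration + pad)
    return flow  # no evasion found within the budget

flow = (4000.0, 5.0)                # 800 B/s: flagged as botnet
evasion = agent_perturb(flow)       # padded duration lowers the rate
print(detect(evasion), evasion[0])  # no longer flagged; bytes preserved
```

In the full scheme described above, each such evasion would be presented to the detector during training so its weights adjust to the agent's perturbations; this sketch only shows the attack side of that loop.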

Item Type: Thesis (Doctoral)
Uncontrolled Keywords: generative adversarial network, cyber security, adversarial attacks
Subjects: G400 Computer Science
G700 Artificial Intelligence
Department: Faculties > Engineering and Environment > Computer and Information Sciences
University Services > Graduate School > Doctor of Philosophy
Depositing User: John Coen
Date Deposited: 15 May 2023 08:16
Last Modified: 15 May 2023 08:30
URI: https://nrl.northumbria.ac.uk/id/eprint/51569
