
Impacting Robustness in Deep Learning-Based NIDS through Poisoning Attacks.

Authors:
Alahmed, Shahad
Alasad, Qutaiba
Yuan, Jiann-Shiun
Alawad, Mohammed
Source:
Algorithms; Apr 2024, Vol. 17, Issue 4, p155, 19p
Publication Year:
2024

Abstract

The rapid expansion and pervasive reach of the internet in recent years have raised concerns about evolving and adaptable online threats, particularly as Machine Learning (ML) systems become extensively integrated into our daily routines. These systems are increasingly targeted by malicious attacks that seek to distort their functionality through poisoning, warping the intended operation of a service away from its true purpose. Poisoning renders systems susceptible to unauthorized access, enabling illicit users to masquerade as legitimate ones and compromising the integrity of smart technology-based systems such as Network Intrusion Detection Systems (NIDSs). It is therefore necessary to keep studying the resilience of deep learning network systems under poisoning attacks, specifically those that interfere with the integrity of data conveyed over networks. This paper explores the resilience of deep learning (DL)-based NIDSs against untethered white-box attacks. More specifically, it introduces a poisoning attack technique designed specifically for deep learning, in which varying amounts of altered instances are injected into training datasets at diverse rates, and then investigates the attack's influence on model performance. We observe that increasing the injection rate (from 1% to 50%) with a randomly amplified distribution only slightly affected the overall accuracy of the system, which stood at 0.93 at the end of the experiments. However, the remaining measures, such as PPV (0.082), FPR (0.29), and MSE (0.67), indicate that the data-manipulation poisoning attacks do impact the deep learning model. These findings shed light on the vulnerability of DL-based NIDSs under poisoning attacks and emphasize the need for defense techniques to secure such systems against these sophisticated threats. Our analysis, supported by experimental results, shows that the generated poisoned data significantly impacted model performance and are hard to detect.
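As a rough illustration of the injection procedure the abstract describes, the following is a minimal Python sketch. The function name poison_training_set, the random feature amplification, and the label flip are hypothetical choices assumed for illustration; the paper's exact perturbation is not given here. The metric helper uses only the standard PPV and FPR definitions.

    import numpy as np

    def poison_training_set(X, y, rate=0.10, noise_scale=0.5, rng=None):
        """Hypothetical sketch of the injection step: select rate * len(X)
        instances, amplify their features with random noise, flip their
        (assumed binary 0/1) labels, and append them to the clean data."""
        rng = rng if rng is not None else np.random.default_rng(0)
        n_poison = int(len(X) * rate)
        idx = rng.choice(len(X), size=n_poison, replace=False)

        # Randomly amplify the selected feature vectors (assumed perturbation).
        X_poison = X[idx] * (1.0 + noise_scale * rng.standard_normal(X[idx].shape))
        # Flip the poisoned labels so the model learns contradictory examples.
        y_poison = 1 - y[idx]

        # Return the clean data with the poisoned instances appended.
        return np.vstack([X, X_poison]), np.concatenate([y, y_poison])

    def ppv_fpr(y_true, y_pred):
        """Standard definitions: PPV = TP / (TP + FP), FPR = FP / (FP + TN)."""
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        tn = np.sum((y_pred == 0) & (y_true == 0))
        return tp / (tp + fp), fp / (fp + tn)

Sweeping rate over 0.01 to 0.50, retraining the NIDS model on each poisoned set, and evaluating accuracy, PPV, FPR, and MSE on a clean test set would reproduce the kind of experiment the abstract reports; the perturbation form and noise_scale are assumptions, not the authors' exact procedure.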

Details

Language:
English
ISSN:
1999-4893
Volume:
17
Issue:
4
Database:
Complementary Index
Journal:
Algorithms
Publication Type:
Academic Journal
Accession Number:
176878909
Full Text:
https://doi.org/10.3390/a17040155