
Bayesian Learned Models Can Detect Adversarial Malware For Free

Authors :
Doan, Bao Gia
Nguyen, Dang Quang
Montague, Paul
Abraham, Tamas
De Vel, Olivier
Camtepe, Seyit
Kanhere, Salil S.
Abbasnejad, Ehsan
Ranasinghe, Damith C.
Publication Year :
2024

Abstract

The vulnerability of machine learning-based malware detectors to adversarial attacks has prompted the need for robust solutions. Adversarial training is an effective method but is computationally expensive to scale up to large datasets and trades model performance for robustness. We hypothesize that adversarial malware exploits the low-confidence regions of models and can be identified using the epistemic uncertainty of ML approaches; in a machine learning-based malware detector, epistemic uncertainty results from a lack of similar training samples in regions of the problem space. In particular, a Bayesian formulation can capture the distribution over model parameters and quantify epistemic uncertainty without sacrificing model performance. To verify our hypothesis, we consider Bayesian learning approaches with a mutual information-based formulation to quantify uncertainty and detect adversarial malware in the Android, Windows, and PDF malware domains. We found that quantifying uncertainty through Bayesian learning methods can defend against adversarial malware. In particular, Bayesian models: (1) are generally capable of identifying adversarial malware in both the feature and problem space, (2) can detect concept drift by measuring uncertainty, and (3) with a diversity-promoting approach (or better posterior approximations), yield parameter instances from the posterior that significantly enhance a detector's ability to identify adversarial malware.

Comment: Accepted to the 29th European Symposium on Research in Computer Security (ESORICS) 2024 Conference
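The mutual information-based uncertainty the abstract refers to is commonly computed as I(y; theta | x, D) = H[ E_theta p(y|x,theta) ] - E_theta H[ p(y|x,theta) ], i.e., the entropy of the averaged prediction minus the average entropy of the individual predictions. Below is a minimal sketch of that score, assuming one already has softmax outputs from several parameter samples drawn from an approximate posterior; the function names and the threshold tau are illustrative, not from the paper.

import numpy as np

def mutual_information(probs):
    # probs: array of shape (S, C) -- softmax outputs of S posterior
    # samples over C classes for a single input.
    eps = 1e-12
    mean_p = probs.mean(axis=0)  # predictive distribution E_theta p(y|x,theta)
    predictive_entropy = -np.sum(mean_p * np.log(mean_p + eps))
    expected_entropy = -np.sum(probs * np.log(probs + eps), axis=1).mean()
    return predictive_entropy - expected_entropy  # epistemic uncertainty

def flag_adversarial(probs, tau):
    # tau is an assumed detection threshold, e.g., calibrated on clean
    # validation data so that benign inputs rarely exceed it.
    return mutual_information(probs) > tau

Inputs close to the training distribution drive this score toward zero because all posterior samples agree; inputs from low-confidence regions, such as adversarial malware, make the samples disagree and inflate the score, which is the intuition behind using it for detection.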

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2403.18309
Document Type :
Working Paper