1. Explainable Artificial Intelligence for Cybersecurity.
- Author
- Sharma, Deepak Kumar; Mishra, Jahanavi; Singh, Aeshit; Govil, Raghav; Srivastava, Gautam; Lin, Jerry Chun-Wei
- Subjects
- *ARTIFICIAL intelligence; *INTERNET security; *TRUST; *MACHINE learning
- Abstract
Recently, numerous Machine Learning (ML) algorithms have been applied in many areas of cybersecurity. However, most of these systems act as black boxes to their users. To improve our understanding of such systems, adversarial machine learning approaches can be used: perturbations are applied to the input, and the most influential features are identified by analyzing the extent of the changes needed to alter the output, which helps pinpoint the main reasons for misclassification. The presented approach obtained satisfactory results that accurately explain the reasons for misclassifications. Some features of the presented method can be applied, without modification, to any classifier with defined gradients. The proposed model can be extended to perform further diagnoses and deeper analyses of systems, achieving more than 95% classification accuracy on the datasets used in the experiments.
• Explains misclassifications by data-driven AI models using an adversarial approach.
• Computes the minimum number of changes to input features required.
• Increased average classification accuracy by 2.5% after modification.
• Designed a black-box attack to test correctness and trustworthiness.
• Used explanation maps to examine the effectiveness of attacks.
[ABSTRACT FROM AUTHOR]
- Published
- 2022
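The abstract describes finding the minimum number of input-feature changes needed to flip a gradient-based classifier's decision. The following is a minimal illustrative sketch of that general idea, not the authors' implementation: the linear model, the greedy one-feature-at-a-time loop, and the function name `minimal_feature_changes` are all assumptions made for this toy example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minimal_feature_changes(x, w, b, step=0.5, max_iter=100):
    """Greedily perturb the single most influential feature (largest
    gradient magnitude) until the prediction of a linear classifier
    flips. Returns the perturbed input and the indices of the features
    that were changed; the size of that set approximates the minimum
    number of feature changes needed for misclassification.
    NOTE: toy sketch only, not the method from the paper."""
    x_adv = x.astype(float).copy()
    orig_label = int(sigmoid(w @ x + b) >= 0.5)
    changed = set()
    for _ in range(max_iter):
        if int(sigmoid(w @ x_adv + b) >= 0.5) != orig_label:
            break  # prediction flipped: stop perturbing
        # For a linear model the gradient of the score w.r.t. the input is w.
        grad = w
        # Pick the most influential feature and push it against the prediction.
        i = int(np.argmax(np.abs(grad)))
        direction = -np.sign(grad[i]) if orig_label == 1 else np.sign(grad[i])
        x_adv[i] += direction * step
        changed.add(i)
    return x_adv, sorted(changed)

# Hypothetical weights and input, chosen so one feature dominates.
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([1.0, 0.0, 0.0])          # initially classified as 1
x_adv, changed = minimal_feature_changes(x, w, b)
```

In this toy run only the first feature is ever touched, so the "explanation" is that feature 0 alone drives the misclassification; analyzing which features end up in `changed` is the kind of diagnosis the abstract attributes to the adversarial approach.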