
A Study on the Effectiveness of Interpretable Machine Learning Explanations in Cybersecurity

Authors :
Frank Xavier Gearhart
Source :
ProQuest LLC, 2024. Ph.D. Dissertation, Northcentral University.
Publication Year :
2024

Abstract

Digital systems are pervasive in modern societies -- augmenting personal and commercial driving, detecting cancer, and exploiting transitory events in financial markets. Attacks on these systems are growing in number, sophistication, and impact, and current cyber defenses are proving inadequate against some of them. Cyber defense tools that use machine learning and artificial intelligence are being fielded, but a significant barrier to their adoption is that they do not provide understandable explanations for their output; without such explanations, the trust needed for effective human-machine teams cannot develop.

This study surveyed active cybersecurity practitioners to determine the extent to which differences in academic education, domain experience, or domain knowledge affected how they viewed a machine-learning-generated malware explanation. The National Institute of Standards and Technology's (NIST) "Artificial Intelligence Risk Management Framework," together with the NIST "Four Principles of Explainable Artificial Intelligence (XAI)," served as the theoretical framework for the study. A random forest machine learning model was trained on the UNSW-NB15 malware dataset using Python v3.10, and the Local Interpretable Model-agnostic Explanations (LIME) Python library was used to generate explanations from the random forest model. A representative ChatGPT explanation was selected and used in the survey, which consisted of nine five-point Likert-scale questions based on the ML explanation and the NIST XAI principles. Although the number of survey responses received fell short of that required for the expected effect size, power, and alpha, the results indicated that cybersecurity experience and knowledge are correlated with how cybersecurity practitioners view machine-learning-generated explanations.
This study may help develop trustworthy machine-learning-based cybersecurity tools to respond to the growing number and sophistication of cyber threats. [The dissertation citations contained here are published with the permission of ProQuest LLC. Further reproduction is prohibited without permission. Copies of dissertations may be obtained by telephone (800) 521-0600 or on the Web: http://www.proquest.com/en-US/products/dissertations/individuals.shtml.]
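To illustrate the kind of pipeline the abstract describes, the sketch below trains a random forest classifier and derives a LIME-style local explanation for a single prediction. It is a minimal, hand-rolled sketch of LIME's core idea (perturb the instance, weight perturbations by proximity, fit a weighted linear surrogate) rather than the `lime` library the study actually used, and it runs on hypothetical synthetic data standing in for the UNSW-NB15 features; all names and parameters here are illustrative assumptions, not the dissertation's code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Hypothetical stand-in for the UNSW-NB15 feature matrix (the real
# dataset contains network-flow features; synthetic data keeps this
# sketch self-contained and runnable).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def lime_style_explanation(model, instance, n_samples=1000, kernel_width=1.0):
    """Sketch of LIME's core idea: sample perturbations around the
    instance, weight them by proximity, and fit a local linear
    surrogate whose coefficients act as per-feature explanation
    weights for this one prediction."""
    rng = np.random.default_rng(0)
    perturbed = instance + rng.normal(scale=0.5,
                                      size=(n_samples, instance.size))
    distances = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)  # RBF proximity kernel
    preds = model.predict_proba(perturbed)[:, 1]  # P(class 1), e.g. "malicious"
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_  # signed local contribution of each feature

coefs = lime_style_explanation(rf, X[0])
top = np.argsort(np.abs(coefs))[::-1][:3]
print("Top locally contributing features:", top, coefs[top])
```

The signed coefficients are what a tool would render as a human-readable explanation ("feature 3 pushed this flow toward malicious"); the study's survey questions probed whether practitioners found such output understandable per the NIST XAI principles.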

Details

Language :
English
ISBN :
979-83-8168-051-5
Database :
ERIC
Journal :
ProQuest LLC
Publication Type :
Dissertation/Thesis
Accession number :
ED645880
Document Type :
Dissertations/Theses - Doctoral Dissertations