
An Empirical Study of Deep Learning Models for Vulnerability Detection

Authors :
Steenhoek, Benjamin
Rahman, Md Mahbubur
Jiles, Richard
Le, Wei
Publication Year :
2022
Publisher :
arXiv, 2022.

Abstract

Deep learning (DL) models of code have recently reported great progress for vulnerability detection. In some cases, DL-based models have outperformed static analysis tools. Although many great models have been proposed, we do not yet have a good understanding of these models, which limits the further advancement of model robustness, debugging, and deployment for vulnerability detection. In this paper, we surveyed and reproduced 9 state-of-the-art (SOTA) deep learning models on 2 widely used vulnerability detection datasets: Devign and MSR. We investigated 6 research questions in three areas, namely model capabilities, training data, and model interpretation. We experimentally demonstrated the variability between different runs of a model and the low agreement among different models' outputs. We investigated models trained for specific types of vulnerabilities compared to a model that is trained on all the vulnerabilities at once. We explored the types of programs DL may consider "hard" to handle. We investigated the relations of training data sizes and training data composition with model performance. Finally, we studied model interpretations and analyzed important features that the models used to make predictions. We believe that our findings can help better understand model results, provide guidance on preparing training data, and improve the robustness of the models. All of our datasets, code, and results are available at https://doi.org/10.6084/m9.figshare.20791240.

Comment: 12 pages, 14 figures. Accepted at ICSE 2023. Camera-ready version.
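As a rough illustration of the "low agreement among different models' outputs" mentioned in the abstract, the sketch below computes simple pairwise agreement between two models' binary predictions on a shared test set. This is not the authors' released code; the function name and the prediction lists are hypothetical placeholders, and the study itself may use different or additional agreement measures.

```python
# Minimal sketch (assumption: binary labels, 1 = vulnerable, 0 = not vulnerable).
# Computes the fraction of test examples on which two models give the same label.

def pairwise_agreement(preds_a, preds_b):
    """Fraction of examples where both models predict the same label."""
    assert len(preds_a) == len(preds_b), "predictions must cover the same test set"
    matches = sum(1 for a, b in zip(preds_a, preds_b) if a == b)
    return matches / len(preds_a)

# Hypothetical outputs from two vulnerability-detection models on 8 test examples
model_a = [1, 0, 1, 1, 0, 0, 1, 0]
model_b = [1, 1, 0, 1, 0, 0, 1, 1]

print(f"Pairwise agreement: {pairwise_agreement(model_a, model_b):.2f}")  # 0.62
```

Repeating such a comparison across all model pairs (and across re-runs of the same model with different random seeds) gives a matrix of agreement scores, which is one straightforward way to quantify run-to-run variability and inter-model disagreement.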

Details

Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....b2ee231239597798cfc76fefd937851a
Full Text :
https://doi.org/10.48550/arxiv.2212.08109