
Evaluating and Improving the Robustness of Security Attack Detectors Generated by LLMs

Authors:
Pasini, Samuele
Kim, Jinhan
Aiello, Tommaso
Lozoya, Rocio Cabrera
Sabetta, Antonino
Tonella, Paolo
Publication Year:
2024

Abstract

Large Language Models (LLMs) are increasingly used in software development to generate functions, such as attack detectors, that implement security requirements. However, LLMs struggle to generate accurate code, resulting, for example, in attack detectors that miss well-known attacks when used in practice. This is most likely due to the LLM lacking knowledge about some existing attacks and to the generated code not being evaluated in real usage scenarios. We propose a novel approach integrating Retrieval Augmented Generation (RAG) and Self-Ranking into the LLM pipeline. RAG enhances the robustness of the output by incorporating external knowledge sources, while the Self-Ranking technique, inspired by the concept of Self-Consistency, generates multiple reasoning paths and ranks them to select the most robust detector. Our extensive empirical study targets code generated by LLMs to detect two prevalent injection attacks in web security: Cross-Site Scripting (XSS) and SQL injection (SQLi). Results show a significant improvement in detection performance compared to baselines, with an increase of up to 71%pt and 37%pt in the F2-Score for XSS and SQLi detection, respectively.
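The Self-Ranking step described in the abstract can be illustrated with a small sketch: generate several candidate detectors, run them on a shared set of probe inputs, and prefer the candidate that agrees most often with the majority vote, a Self-Consistency-style proxy for robustness. The sketch below is a simplified illustration of that idea, not the paper's implementation; all names (retrieve_attack_knowledge, self_rank, the toy detectors) are hypothetical, and the retrieval step is reduced to a crude lexical overlap score.

```python
# Simplified illustration of the RAG + Self-Ranking idea from the abstract.
# All names below are hypothetical placeholders, not the paper's code.
from typing import Callable, List, Tuple


def lexical_overlap(a: str, b: str) -> int:
    """Crude relevance score (shared lowercase tokens), for illustration only."""
    return len(set(a.lower().split()) & set(b.lower().split()))


def retrieve_attack_knowledge(requirement: str, knowledge_base: List[str], k: int = 3) -> List[str]:
    """Toy RAG retriever: pick the k snippets most relevant to the requirement
    (e.g., descriptions of known XSS or SQLi payload patterns) to be added
    to the LLM prompt before generating the detector."""
    return sorted(knowledge_base, key=lambda doc: -lexical_overlap(requirement, doc))[:k]


def self_rank(candidates: List[Callable[[str], bool]],
              probe_inputs: List[str]) -> List[Tuple[int, Callable[[str], bool]]]:
    """Self-Consistency-style ranking: score each candidate detector by how
    often it agrees with the majority vote of all candidates on the probe
    inputs, then return candidates sorted from most to least consistent."""
    votes = [[det(x) for det in candidates] for x in probe_inputs]
    majority = [sum(row) > len(candidates) / 2 for row in votes]
    scores = [
        sum(votes[i][j] == majority[i] for i in range(len(probe_inputs)))
        for j in range(len(candidates))
    ]
    return sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)


if __name__ == "__main__":
    # Three toy "generated detectors" standing in for LLM outputs of varying quality.
    detectors = [
        lambda s: "<script" in s.lower(),                                  # narrow XSS check
        lambda s: "<script" in s.lower() or "onerror=" in s.lower(),       # broader check
        lambda s: "onerror=" in s.lower(),                                 # different narrow check
    ]
    probes = ["<script>alert(1)</script>", "<img src=x onerror=alert(1)>", "hello world"]
    ranked = self_rank(detectors, probes)
    print([score for score, _ in ranked])  # the broader detector ranks first
```

In this toy run the broadest detector agrees with the majority on all probes and is ranked first, which mirrors the intent of selecting the most robust of several generated candidates.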

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2411.18216
Document Type:
Working Paper