
ShadowBug: Enhanced Synthetic Fuzzing Benchmark Generation

Authors: Zhengxiang Zhou, Cong Wang
Source: IEEE Open Journal of the Computer Society, Vol. 5, pp. 95-106 (2024)
Publication Year: 2024
Publisher: IEEE

Abstract

Fuzzers have proven to be vital tools for identifying vulnerabilities. Because fuzzing is an area of active research, there is a constant drive to improve fuzzers, and it is equally important that the benchmarks used to evaluate them keep pace with their evolving heuristics. Current research has primarily focused on using CVE bugs as benchmarks; synthetic benchmarks have received less attention owing to concerns about overfitting to specific fuzzing heuristics. In this paper, we introduce ShadowBug, a new methodology for generating enhanced synthetic bugs. In contrast to existing synthetic benchmarks, our approach arranges bugs to fit specific difficulty distributions by quantifying the constraint-solving difficulty of each block. We also uncover implicit constraints in real-world bugs that prior research has overlooked, and we develop an integer-overflow-based transformation from normal constraints to their implicit forms. We construct a synthetic benchmark and evaluate five prominent fuzzers against it. The experiments reveal that 391 of 466 bugs were detected, which confirms the practicality and effectiveness of our methodology. Additionally, we introduce a finer-grained evaluation metric, "bug difficulty," which sheds more light on each fuzzer's heuristic strengths in constraint solving and bug exploitation. The results of our study have practical implications for future fuzzer evaluation methods.
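The abstract mentions a transformation from normal (explicit) constraints to implicit forms based on integer overflow. The paper's exact transformation is not reproduced here; the following is only a hypothetical C sketch of the general idea, where an equality check against a magic constant is rewritten so the guard is satisfied only when unsigned arithmetic wraps around, hiding the constant from naive pattern matching while remaining equivalent:

```c
#include <stdint.h>

/* Explicit form: the bug triggers when the input equals a magic
   constant. This constraint is directly visible to a solver or to
   constant-scanning heuristics. (Constant chosen for illustration.) */
int bug_explicit(uint32_t x) {
    return x == 0xFFFFFFF0u;
}

/* Hypothetical implicit form: unsigned addition of 0x10 wraps past
   zero if and only if x >= 0xFFFFFFF0u, so the wrapped result being
   exactly 0 is equivalent to x == 0xFFFFFFF0u, but the magic value
   never appears as a comparison operand. */
int bug_implicit(uint32_t x) {
    uint32_t y = x + 0x10u;       /* wraps (modulo 2^32) iff x >= 0xFFFFFFF0u */
    return y < 0x10u && y == 0u;  /* wrap occurred AND x was exactly 0xFFFFFFF0u */
}
```

Because unsigned overflow in C is well-defined (modulo 2^32 for `uint32_t`), the two predicates accept exactly the same input, yet a fuzzer must reason about the wrap-around to solve the implicit version.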

Details

Language: English
ISSN: 2644-1268
Volume: 5
Database: Directory of Open Access Journals
Journal: IEEE Open Journal of the Computer Society
Publication Type: Academic Journal
Accession Number: edsdoj.06f3e8058d694fcc98d7ebe7d7cd915a
Document Type: Article
Full Text: https://doi.org/10.1109/OJCS.2024.3378384