Magma: A Ground-Truth Fuzzing Benchmark

09/02/2020
by Ahmad Hazimeh, et al.

High scalability and low running costs have made fuzz testing the de facto standard for discovering software bugs. Fuzzing techniques are constantly being improved in a race to build the ultimate bug-finding tool. However, while fuzzing excels at finding bugs, evaluating and comparing fuzzer performance is challenging due to the lack of metrics and benchmarks. Crash count, the most common performance metric, is inaccurate due to imperfections in deduplication techniques. Moreover, the lack of a unified set of targets results in ad hoc evaluations that inhibit fair comparison. We tackle these problems by developing Magma, a ground-truth evaluation framework that enables uniform fuzzer evaluation and comparison. By introducing real bugs into real software, Magma allows for realistic evaluation of fuzzers against a broad set of targets. By instrumenting these bugs, Magma also enables the collection of bug-centric performance metrics independent of the fuzzer. Magma is an open benchmark consisting of seven targets that perform a variety of input manipulations and complex computations, presenting a challenge to state-of-the-art fuzzers. We evaluate six popular mutation-based greybox fuzzers (AFL, AFLFast, AFL++, FairFuzz, MOpt-AFL, and honggfuzz) against Magma over 200,000 CPU-hours. Based on the number of bugs reached, triggered, and detected, we draw conclusions about the fuzzers' exploration and detection capabilities. This provides insight into fuzzer performance evaluation, highlighting the importance of ground truth in performing more accurate and meaningful evaluations.
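
To make the instrumentation idea concrete, here is a minimal sketch of a bug canary in the style the abstract describes. It is not Magma's actual instrumentation: the macro BUG_CANARY, the bug ID "BUG001", and the function parse_chunk are hypothetical names chosen for illustration. The point is that each injected bug site reports when execution reaches it and when the input satisfies the bug-triggering condition, independently of whether the program later crashes.

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical canary (not Magma's real API): record that execution
 * reached this bug site, and whether the input satisfies the
 * bug-triggering condition, regardless of any later crash. */
#define BUG_CANARY(bug_id, cond)                          \
    do {                                                  \
        fprintf(stderr, "REACHED %s\n", (bug_id));        \
        if (cond)                                         \
            fprintf(stderr, "TRIGGERED %s\n", (bug_id));  \
    } while (0)

/* Example bug site: a length field taken from the input can exceed the
 * actual buffer size, modeling an injected out-of-bounds read. */
static int parse_chunk(const unsigned char *buf, size_t len)
{
    if (len == 0)
        return -1;
    size_t declared = buf[0];              /* attacker-controlled length */
    BUG_CANARY("BUG001", declared > len);  /* canary fires before the bug */
    /* ... the original (buggy) parsing logic would follow here ... */
    return 0;
}

int main(void)
{
    /* Input declares a length of 8 but only 2 bytes are present, so the
     * canary reports both REACHED and TRIGGERED for BUG001. */
    const unsigned char input[] = { 0x08, 0x01 };
    return parse_chunk(input, sizeof input);
}
```

Because the canary evaluates the triggering condition itself, reached and triggered counts do not depend on the fuzzer's crash detection or deduplication, which is what makes such metrics ground truth.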
