DeepMutants: Training neural bug detectors with contextual mutations
Learning-based bug detectors promise to find bugs in large code bases by exploiting natural hints such as the names of variables and functions or comments. Still, existing techniques tend to underperform when presented with realistic bugs. We believe that bug detector learning currently suffers from a lack of realistic defective training examples. In fact, real-world bugs are scarce, which has driven existing methods to train on artificially created and mostly unrealistic mutants. In this work, we propose a novel contextual mutation operator that incorporates knowledge about the mutation context to dynamically inject natural and more realistic faults into code. Our approach employs a masked language model to produce a context-dependent distribution over feasible token replacements. Our evaluation shows that sampling from a language model not only produces mutants that more accurately represent real bugs but also leads to better-performing bug detectors, both on artificial benchmarks and on real-world source code.
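To make the core idea concrete, here is a minimal sketch of one contextual mutation step, assuming a Hugging Face fill-mask pipeline and the microsoft/codebert-base-mlm checkpoint as stand-ins for the masked language model; the masking, filtering, and sampling details in DeepMutants itself may differ.

```python
# Illustrative sketch: sample a context-dependent token replacement from a
# masked language model to inject a "natural" fault into a code snippet.
# Model choice and sampling scheme are assumptions, not the paper's exact setup.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="microsoft/codebert-base-mlm")

def contextual_mutant(code: str, target: str) -> str:
    """Replace one occurrence of `target` with a token sampled from the
    masked LM's context-dependent distribution over replacements."""
    mask = fill_mask.tokenizer.mask_token            # "<mask>" for RoBERTa-style models
    masked = code.replace(target, mask, 1)
    candidates = fill_mask(masked, top_k=10)         # list of {token_str, score, ...}
    # Drop the original token so the sampled replacement actually changes the program.
    candidates = [c for c in candidates if c["token_str"].strip() != target]
    weights = [c["score"] for c in candidates]
    chosen = random.choices(candidates, weights=weights, k=1)[0]
    return masked.replace(mask, chosen["token_str"].strip(), 1)

# Example: mutating a call in context tends to yield plausible-looking faults
# (e.g. another in-scope name) rather than arbitrary token swaps.
print(contextual_mutant("if i < len(items):\n    return items[i]", "len"))
```

Because the replacement is drawn from the model's distribution given the surrounding code, the injected defect is more likely to resemble a mistake a developer could plausibly make than a uniformly random mutation would be.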