Where to Look When Repairing Code? Comparing the Attention of Neural Models and Developers

05/12/2023
by Dominik Huber, et al.

Neural network-based techniques for automated program repair are becoming increasingly effective. Despite their success, little is known about why they succeed or fail, and how their way of reasoning about the code to repair compares to that of human developers. This paper presents the first in-depth study comparing human and neural program repair. In particular, we investigate what parts of the buggy code humans and two state-of-the-art neural repair models focus on. This comparison is enabled by a novel attention-tracking interface for human code editing, through which we gather a dataset of 98 bug-fixing sessions, and by the attention layers of the neural repair models. Our results show that the attention of the humans and of both neural models often overlaps (0.35 to 0.44 correlation). At the same time, the agreement between humans and models still leaves room for improvement, as evidenced by the higher human-human correlation of 0.56. While the two models either focus mostly on the buggy line or on the surrounding context, the developers adopt a hybrid approach that evolves over time, where 36.8% of their attention goes to the buggy line and the rest to the context. Overall, we find the humans to still be clearly more effective at finding a correct fix, with 67.3% correctly predicted patches. The results and data of this study are a first step toward a deeper understanding of the internal process of neural program repair, and offer insights, inspired by the behavior of human developers, on how to further improve neural repair models.
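To make the comparison above concrete, the following is a minimal sketch, not the authors' code, of how per-token attention from a human bug-fixing session and from a neural repair model could be correlated. The arrays `human_attention` and `model_attention` and their values are hypothetical placeholders; the paper does not specify this exact procedure, and Pearson correlation is only one plausible choice of agreement measure.

```python
# Hedged sketch: correlating human and model attention over the same
# buggy code snippet. Both vectors are assumed to assign one
# non-negative weight per token (e.g., fixation time for the human,
# averaged self-attention weight for the model).
import numpy as np
from scipy.stats import pearsonr

def normalize(weights: np.ndarray) -> np.ndarray:
    """Scale non-negative attention weights so they sum to 1."""
    total = weights.sum()
    return weights / total if total > 0 else weights

# Hypothetical attention over 8 tokens (buggy line + context).
human_attention = normalize(np.array([0.9, 2.1, 0.3, 0.2, 1.5, 0.1, 0.4, 0.5]))
model_attention = normalize(np.array([0.7, 1.8, 0.2, 0.4, 1.9, 0.2, 0.3, 0.5]))

# Correlate the two distributions; the study reports 0.35-0.44
# human-model correlation and 0.56 between pairs of humans.
corr, p_value = pearsonr(human_attention, model_attention)
print(f"human-model attention correlation: {corr:.2f} (p={p_value:.3f})")
```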

