Improving automatically generated code from Codex via Automated Program Repair

by Zhiyu Fan, et al.

Large language models, e.g., Codex and AlphaCode, have shown the capability to produce working code for many programming tasks. However, the success rate of existing models remains low, especially for complex programming tasks. One reason is that language models lack awareness of program semantics (e.g., type information), resulting in incorrect programs or even programs that do not compile. In this paper, we systematically study whether automated program repair (APR) techniques can fix the incorrect solutions produced by language models in LeetCode contests. The goal is to determine whether APR techniques can enhance confidence in the code produced by language models. Our study reveals that: (1) automatically generated code shares some common programming mistakes with human-crafted solutions, indicating that existing APR tools have the potential to fix auto-generated code; (2) TBar and Recoder, two well-known Java APR tools based on templates and learning respectively, increase the number of solved tasks from 37 to 42 on 60 easy-level tasks, and from 5 to 9 on 53 medium-level tasks; (3) given bug location information provided by a statistical fault localization approach, the newly released Codex edit mode, which supports changing existing code, may outperform existing APR tools in fixing incorrect solutions. Based on an analysis of the experimental results produced by these tools, we offer several suggestions for improving current APR tools.
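The abstract refers to a statistical fault localization approach that supplies bug locations to the repair tools, without naming a formula. A common choice in this family is the Ochiai metric, which ranks each statement by how strongly its coverage correlates with failing tests. The sketch below is illustrative only (the statement IDs, test names, and coverage data are invented), not the paper's actual setup:

```python
import math

def ochiai(ef, ep, total_failing):
    """Ochiai suspiciousness: ef = failing tests covering the statement,
    ep = passing tests covering it, total_failing = all failing tests."""
    if ef == 0:
        return 0.0
    return ef / math.sqrt(total_failing * (ef + ep))

# Hypothetical coverage matrix: statement id -> tests that execute it.
coverage = {
    1: {"t1", "t2", "t3"},
    2: {"t1", "t3"},
    3: {"t2"},
}
failing = {"t3"}  # hypothetical failing test

scores = {}
for stmt, tests in coverage.items():
    ef = len(tests & failing)        # failing tests covering stmt
    ep = len(tests - failing)        # passing tests covering stmt
    scores[stmt] = ochiai(ef, ep, len(failing))

# Statements ranked most-suspicious first; the top entries would be
# handed to an APR tool (or to Codex edit mode) as candidate bug locations.
ranked = sorted(scores, key=scores.get, reverse=True)
```

Here statement 2 ranks highest because every failing test executes it while only one passing test does; statement 3, never touched by a failing test, scores zero and would not be proposed as a repair location.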

