Majority Rule: better patching via Self-Consistency

05/31/2023
by Toufique Ahmed, et al.

Large Language Models (LLMs) can be induced to solve non-trivial problems with "few-shot" prompts that include illustrative problem-solution examples. If the few-shot examples also include "chain of thought" (CoT) explanations, of the form problem-explanation-solution, LLMs generate an "explained" solution and perform even better. Recently, a substantially better technique, self-consistency [1] (S-C), has emerged, based on the intuition that there are many plausible explanations for the right solution: when the LLM is sampled repeatedly to generate a pool of explanation-solution pairs for a given problem, the most frequently occurring solutions in the pool (ignoring the explanations) tend to be even more likely to be correct. Unfortunately, the use of the highly performant S-C (or even CoT) approach in software engineering settings is hampered by the fact that most software datasets lack explanations. In this paper, we describe an application of the S-C approach to program repair, using the commit log of the fix as the explanation, only in the illustrative few-shots. We achieve state-of-the-art results, beating previous approaches to prompting-based program repair on the MODIT dataset; we also find evidence suggesting that correct commit messages help the LLM learn to produce better patches.
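The core of the self-consistency step described above is a simple majority vote: sample many explanation-solution pairs, discard the explanations, and keep the most frequent solution. A minimal sketch of that vote (the function name, the sample pool, and the candidate patches are all hypothetical, for illustration only):

```python
from collections import Counter

def self_consistency_vote(samples):
    """Return the most frequent solution among sampled
    (explanation, solution) pairs, ignoring the explanations."""
    solutions = [solution for _explanation, solution in samples]
    winner, _count = Counter(solutions).most_common(1)[0]
    return winner

# Hypothetical pool sampled from an LLM for one buggy loop:
samples = [
    ("off-by-one in loop bound",   "for (i = 0; i < n; i++)"),
    ("loop iterates one past end", "for (i = 0; i < n; i++)"),
    ("index starts too early",     "for (i = 1; i <= n; i++)"),
]
print(self_consistency_vote(samples))  # → for (i = 0; i < n; i++)
```

Note that two samples with different explanations but the same patch reinforce each other, which is exactly the intuition behind S-C: many plausible explanations converge on the correct solution.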


