Forward-Backward Reasoning in Large Language Models for Verification

08/15/2023
by Weisen Jiang, et al.

Chain-of-Thought (CoT) prompting has shown promising performance in various reasoning tasks. Recently, Self-Consistency <cit.> proposed sampling a diverse set of reasoning chains, which may lead to different answers, and selecting the answer that receives the most votes. In this paper, we propose a novel method that uses backward reasoning to verify candidate answers. We mask a token in the question by x and ask the LLM to predict the masked token when a candidate answer is provided via a simple template, i.e., "If we know the answer of the above question is {a candidate answer}, what is the value of unknown variable x?" Intuitively, the LLM is expected to predict the masked token successfully if the provided candidate answer is correct. We further propose FOBAR, which combines forward and backward reasoning to estimate the probability of each candidate answer. We conduct extensive experiments on six data sets and three LLMs. Experimental results demonstrate that FOBAR achieves state-of-the-art performance on various reasoning benchmarks.
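To make the verification scheme concrete, the sketch below illustrates one way the forward and backward signals could be combined. It is only a minimal illustration under stated assumptions: the `llm_sample` and `extract_answer` helpers are hypothetical stand-ins for an LLM sampling API and an answer parser, and the combination rule (forward vote share weighted by backward accuracy) is an assumed scoring choice, not necessarily the paper's exact estimator.

```python
# Illustrative sketch of forward-backward candidate scoring.
# Assumes hypothetical helpers:
#   llm_sample(prompt, n)      -> list of n sampled completions
#   extract_answer(completion) -> parsed final answer (or None)
from collections import Counter

BACKWARD_TEMPLATE = (
    "{masked_question}\n"
    "If we know the answer of the above question is {candidate}, "
    "what is the value of unknown variable x?"
)

def forward_votes(question, n_samples, llm_sample, extract_answer):
    """Sample forward reasoning chains and count votes per candidate answer."""
    answers = [extract_answer(c) for c in llm_sample(question, n_samples)]
    return Counter(a for a in answers if a is not None)

def backward_score(masked_question, masked_value, candidate, n_samples,
                   llm_sample, extract_answer):
    """Fraction of backward chains that recover the masked token x."""
    prompt = BACKWARD_TEMPLATE.format(masked_question=masked_question,
                                      candidate=candidate)
    preds = [extract_answer(c) for c in llm_sample(prompt, n_samples)]
    hits = sum(1 for p in preds if p == masked_value)
    return hits / max(n_samples, 1)

def select_answer(question, masked_question, masked_value,
                  llm_sample, extract_answer, n_fwd=16, n_bwd=8):
    """Pick the candidate with the largest combined forward/backward evidence.

    The product of forward vote share and backward accuracy below is an
    illustrative assumption about how the two directions could be combined.
    """
    votes = forward_votes(question, n_fwd, llm_sample, extract_answer)
    total = sum(votes.values()) or 1
    scores = {}
    for candidate, v in votes.items():
        b = backward_score(masked_question, masked_value, candidate, n_bwd,
                           llm_sample, extract_answer)
        scores[candidate] = (v / total) * b
    return max(scores, key=scores.get) if scores else None
```

In this sketch, a candidate that wins many forward votes but fails to let the model recover the masked token is down-weighted, which is the intuition behind using backward reasoning as a verifier.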
