Reasoning Circuits: Few-shot Multihop Question Generation with Structured Rationales

11/15/2022
by Saurabh Kulshreshtha, et al.

Multi-hop Question Generation is the task of generating questions that require the reader to reason over and combine information spread across multiple passages using several reasoning steps. Chain-of-thought rationale generation has been shown to improve performance on multi-step reasoning tasks and to make model predictions more interpretable. However, few-shot performance gains from including rationales have been observed largely in language models with over 100B parameters, and otherwise require large-scale manual rationale annotation. In this work, we introduce a new framework for applying chain-of-thought-inspired structured rationale generation to multi-hop question generation under a very low supervision regime (8- to 128-shot). We annotate a small number of examples following our multi-step rationale schema, treating each reasoning step as a separate task to be performed by a generative language model. We show that our framework leads to improved control over the difficulty of the generated questions and better performance compared to baselines trained without rationales, both on automatic evaluation metrics and in human evaluation. Importantly, we show that this is achievable with a modest model size.
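
To make the step-wise idea concrete, here is a minimal sketch (not the authors' code) of how each reasoning step of a structured rationale could be run as its own few-shot generation task, with the final question conditioned on the intermediate outputs. The prompt wording, step names, and model choice (`google/flan-t5-base`) are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of step-wise rationale-then-question generation.
# All step names and prompts below are hypothetical illustrations.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

def run_step(instruction: str, few_shot_examples: list[str], current_input: str) -> str:
    """Perform one reasoning step as its own few-shot generation task."""
    prompt = "\n\n".join([instruction, *few_shot_examples, current_input])
    return generator(prompt, max_new_tokens=128)[0]["generated_text"]

def generate_multihop_question(passages: list[str], answer: str, demos: dict) -> str:
    """Chain the rationale steps, feeding each step's output into the next."""
    context = "\n".join(passages)

    # Step 1 (hypothetical): identify the fact that links the passages to the answer.
    bridge = run_step(
        "Identify the fact that connects the passages to the answer.",
        demos["bridge"], f"Passages:\n{context}\nAnswer: {answer}")

    # Step 2 (hypothetical): write out the reasoning chain from that fact.
    rationale = run_step(
        "Write the reasoning steps needed to reach the answer.",
        demos["rationale"], f"Passages:\n{context}\nBridge fact: {bridge}\nAnswer: {answer}")

    # Step 3: generate the multi-hop question conditioned on the rationale.
    return run_step(
        "Write a question that requires all of the reasoning steps to answer.",
        demos["question"], f"Rationale:\n{rationale}\nAnswer: {answer}")
```

Each `demos[...]` list would hold the handful of manually annotated exemplars (8 to 128 in the paper's setting) for that step; splitting the schema across steps is what keeps the per-step prompts short enough for a modestly sized model.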
