Learning to Select from Multiple Options

by Jiangshu Du, et al.

Many NLP tasks can be regarded as a selection problem over a set of options, e.g., classification tasks, multi-choice question answering, etc. Textual entailment (TE) has been shown to be a state-of-the-art (SOTA) approach to these selection problems. TE treats the input text as the premise (P) and each option as a hypothesis (H), then handles selection by modeling each (P, H) pair independently. This has two limitations: first, pairwise modeling is unaware of the other options, which is less intuitive since humans often determine the best option by comparing competing candidates; second, the inference of pairwise TE is time-consuming, especially when the option space is large. To address these two issues, this work first proposes a contextualized TE model (Context-TE) that appends the other k options as context to the current (P, H) pair. Context-TE learns a more reliable decision for H since it takes the competing options into account. Second, we speed up Context-TE with Parallel-TE, which learns the decisions for multiple options simultaneously. Parallel-TE significantly improves inference speed while keeping performance comparable to Context-TE. Our methods are evaluated on three tasks (ultra-fine entity typing, intent detection, and multi-choice QA) that are typical selection problems with different option-space sizes. Experiments show our models set new SOTA performance; in particular, Parallel-TE is k times faster than pairwise TE at inference. Our code is publicly available at https://github.com/jiangshdd/LearningToSelect.
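To make the difference between the three formulations concrete, here is a minimal sketch of how their encoder inputs might be constructed. This is an illustration only, assuming a `[SEP]`-delimited sequence encoder; the function names and the exact separator/joining scheme are assumptions, not the paper's actual implementation.

```python
def pairwise_te_inputs(premise, options):
    # Standard pairwise TE: one (P, H) sequence per option,
    # each scored by the model independently of the others.
    return [f"{premise} [SEP] {h}" for h in options]

def context_te_inputs(premise, options):
    # Context-TE (sketch): each (P, H) pair is augmented with the
    # remaining k options as context, so the model can compare
    # the current hypothesis against its competitors.
    inputs = []
    for i, h in enumerate(options):
        context = " ; ".join(o for j, o in enumerate(options) if j != i)
        inputs.append(f"{premise} [SEP] {h} [SEP] {context}")
    return inputs

def parallel_te_input(premise, options):
    # Parallel-TE (sketch): all options are packed into a single
    # sequence, so one forward pass can yield a decision for every
    # option at once -- roughly k times fewer passes than pairwise TE.
    return f"{premise} [SEP] " + " [SEP] ".join(options)
```

With k options, pairwise TE and Context-TE each require k forward passes, while Parallel-TE needs only one, which is the source of the k-fold inference speedup described above.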

