UFO: Unified Fact Obtaining for Commonsense Question Answering

05/25/2023
by Zhifeng Li, et al.

Leveraging external knowledge to enhance reasoning is crucial for commonsense question answering. However, existing knowledge bases rely heavily on manual annotation, which inevitably limits their coverage of world commonsense knowledge; as a result, they are not flexible enough to support reasoning over diverse questions. Recently, large language models (LLMs) have become dramatically better at capturing and leveraging knowledge, opening up a new way to elicit knowledge directly from language models. We propose a Unified Fact Obtaining (UFO) approach. UFO turns LLMs into knowledge sources and produces relevant facts (knowledge statements) for a given question. We first design a unified prompt consisting of demonstrations that cover different aspects of commonsense and different question styles. On this basis, we instruct LLMs via prompting to generate question-related supporting facts for various commonsense questions. After fact generation, we apply a dense retrieval-based fact selection strategy to choose the best-matched fact, which is then fed into the answer inference model along with the question. Notably, thanks to the unified prompt design, UFO supports reasoning over various commonsense aspects, including general, scientific, and social commonsense. Extensive experiments on the CommonsenseQA 2.0, OpenBookQA, QASC, and Social IQA benchmarks show that UFO significantly improves the performance of the inference model and outperforms manually constructed knowledge sources.
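The pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `generate_facts`, `select_fact`, and `answer` are hypothetical names, the LLM and inference model are stand-in callables, and the bag-of-words cosine similarity is a toy substitute for the dense retriever the paper assumes.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the paper's selection step would
    # use a real dense sentence encoder instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def generate_facts(question, llm, demonstrations, k=3):
    # Step 1: prompt the LLM with the unified demonstrations plus the
    # question, sampling k candidate facts (knowledge statements).
    prompt = "\n\n".join(demonstrations + [f"Question: {question}\nFact:"])
    return [llm(prompt) for _ in range(k)]

def select_fact(question, facts):
    # Step 2: retrieval-style selection - keep the candidate fact
    # most similar to the question.
    q = embed(question)
    return max(facts, key=lambda f: cosine(q, embed(f)))

def answer(question, llm, inference_model, demonstrations):
    # Full UFO-style flow: generate facts, pick the best match,
    # and feed question + fact to the answer inference model.
    fact = select_fact(question, generate_facts(question, llm, demonstrations))
    return inference_model(question, fact)
```

In the paper, a single unified prompt serves all commonsense aspects, so only `demonstrations` would change across benchmarks; the stubs here just show the data flow between the generation, selection, and inference stages.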


Related research:

- CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm (10/12/2022)
- Alleviating the Knowledge-Language Inconsistency: A Study for Deep Commonsense Knowledge (05/28/2021)
- Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge (07/02/2020)
- Does Wikidata Support Analogical Reasoning? (10/02/2022)
- KEPR: Knowledge Enhancement and Plausibility Ranking for Generative Commonsense Question Answering (05/15/2023)
- Answering Subjective Induction Questions on Products by Summarizing Multi-sources Multi-viewpoints Knowledge (09/12/2023)
- Lila: A Unified Benchmark for Mathematical Reasoning (10/31/2022)
