Language Models with Rationality

05/23/2023
by Nora Kassner, et al.

While large language models (LLMs) are proficient at question-answering (QA), the dependencies between their answers and other "beliefs" they may have about the world are typically unstated, and may even be in conflict. Our goal is to uncover such dependencies and reduce inconsistencies among them, so that answers are supported by faithful, system-believed chains of reasoning drawn from a consistent network of beliefs. Our approach, which we call REFLEX, is to add a "rational", self-reflecting layer on top of the LLM. First, given a question, we construct a belief graph using a backward-chaining process to materialize relevant model "beliefs" (including beliefs about answer candidates) and the inferential relationships between them. Second, we identify and minimize contradictions in that graph using a formal constraint reasoner. We find that REFLEX significantly improves consistency (by 8%-11% absolute) without harming overall answer accuracy, resulting in answers supported by faithful chains of reasoning drawn from a more consistent belief system. This suggests a new style of system architecture, in which an LLM extended with a rational layer of self-reflection can repair latent inconsistencies within the LLM alone.
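The two stages described above (materialize a belief graph, then resolve contradictions) can be illustrated with a minimal sketch. This is not the authors' code: the `Belief` class, the implication encoding, and the greedy `resolve` routine are all illustrative assumptions, and the greedy confidence-based flipping here is only a toy stand-in for the formal constraint reasoner used in the paper.

```python
# Hypothetical sketch of REFLEX's two stages (not the authors' implementation).
# Stage 1: a small "belief graph": statements with truth values and model
#          confidences, plus implication edges between them.
# Stage 2: resolve contradictions by flipping the side of each violated
#          implication that the model is least confident about -- a greedy
#          toy version of the paper's formal constraint reasoner.

from dataclasses import dataclass

@dataclass
class Belief:
    text: str
    truth: bool        # model's current truth assignment
    confidence: float  # model's confidence in that assignment

def resolve(beliefs, implications):
    """Greedily repair violated implications (premise -> conclusion).

    `implications` is a list of (premise_index, conclusion_index) pairs;
    a pair is violated when the premise is believed true but the
    conclusion is believed false. The less-confident side is flipped.
    """
    changed = True
    while changed:
        changed = False
        for p, c in implications:
            if beliefs[p].truth and not beliefs[c].truth:
                # Contradiction: flip whichever belief the model trusts less.
                weakest = p if beliefs[p].confidence < beliefs[c].confidence else c
                beliefs[weakest].truth = not beliefs[weakest].truth
                beliefs[weakest].confidence = 1.0 - beliefs[weakest].confidence
                changed = True
    return beliefs

# Toy belief graph (example statements are invented for illustration).
beliefs = [
    Belief("A penguin is a bird", True, 0.95),
    Belief("Birds can fly", True, 0.60),
    Belief("A penguin can fly", False, 0.90),
]
# One implication edge: "Birds can fly" -> "A penguin can fly" (violated).
resolved = resolve(beliefs, [(1, 2)])
```

After resolution, the low-confidence overgeneralization "Birds can fly" is flipped to false rather than the high-confidence belief "A penguin can fly" being flipped to true, leaving the belief set contradiction-free.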


Related research

Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning (10/21/2022)
Our goal is a question-answering (QA) system that can show how its answe...

Towards Teachable Reasoning Systems (04/27/2022)
Our goal is a teachable reasoning system for question-answering (QA), wh...

Enriching a Model's Notion of Belief using a Persistent Memory (04/16/2021)
Although pretrained language models (PTLMs) have been shown to contain s...

BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief (09/29/2021)
Although pretrained language models (PTLMs) contain significant amounts ...

Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference (11/21/2022)
While large pre-trained language models are powerful, their predictions ...

The RatioLog Project: Rational Extensions of Logical Reasoning (03/20/2015)
Higher-level cognition includes logical reasoning and the ability of que...

The moral authority of ChatGPT (01/13/2023)
ChatGPT is not only fun to chat with, but it also searches information, ...
