Reliable Natural Language Understanding with Large Language Models and Answer Set Programming

by Abhiramon Rajasekharan, et al.

Humans understand language by extracting information (meaning) from sentences, combining it with existing commonsense knowledge, and then performing reasoning to draw conclusions. While large language models (LLMs) such as GPT-3 and ChatGPT can leverage patterns in text to solve a variety of NLP tasks, they fall short on problems that require reasoning. They also cannot reliably explain the answers they generate for a given question. In order to emulate humans better, we propose STAR, a framework that combines LLMs with Answer Set Programming (ASP). We show how LLMs can be used to effectively extract knowledge, represented as predicates, from language. Goal-directed ASP is then employed to reliably reason over this knowledge. We apply the STAR framework to three different NLU tasks requiring reasoning: qualitative reasoning, mathematical reasoning, and goal-directed conversation. Our experiments reveal that STAR is able to bridge the reasoning gap in NLU tasks, leading to significant performance improvements, especially for smaller LLMs (i.e., LLMs with fewer parameters). NLU applications developed using the STAR framework are also explainable: along with the predicates generated, a justification in the form of a proof tree can be produced for a given output.
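The two-stage pipeline described in the abstract, where an LLM extracts predicates and a goal-directed ASP engine reasons over them, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the real STAR framework prompts an LLM for extraction and uses a goal-directed ASP system such as s(CASP); here the "extracted" facts are hard-coded and the commonsense rule is applied by a hand-written forward-chaining step.

```python
# Toy sketch of the STAR idea (hypothetical, not the paper's code):
# Stage 1 -- predicate extraction (normally done by an LLM) for the sentence
# "The metal spoon was left in hot water; the wooden spoon was not."
facts = {
    ("material", "spoon1", "metal"),
    ("material", "spoon2", "wood"),
    ("in_hot_water", "spoon1"),
}

# Stage 2 -- reasoning over the predicates (normally done by goal-directed
# ASP). The commonsense rule, in ASP syntax, would be:
#   gets_hot(X) :- material(X, metal), in_hot_water(X).
def derive(facts):
    """Apply the single rule above by naive forward chaining."""
    derived = set(facts)
    for (pred, *args) in facts:
        if pred == "material" and args[1] == "metal":
            obj = args[0]
            if ("in_hot_water", obj) in facts:
                derived.add(("gets_hot", obj))
    return derived

conclusions = derive(facts)
print(("gets_hot", "spoon1") in conclusions)  # the metal spoon gets hot
print(("gets_hot", "spoon2") in conclusions)  # the wooden spoon does not
```

Because the rule application is explicit, each derived conclusion can be traced back to the facts and rule that produced it, which is the same property that lets STAR emit a proof tree as justification.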

