Examining the Emergence of Deductive Reasoning in Generative Language Models

05/31/2023
by   Peter Belcak, et al.

We conduct a preliminary inquiry into the ability of generative transformer models to reason deductively from provided premises. We observe notable differences in the performance of models from different training setups and find that deductive reasoning ability increases with model scale. Further, we discover that performance generally does not decrease with the length of the deductive chain needed to reach the conclusion, with the exception of the OpenAI GPT-3 and GPT-3.5 models. Our study considers a wide variety of transformer decoder models, ranging in size from 117 million to 175 billion parameters.
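To make the kind of probe described above concrete, the following is a minimal, hypothetical sketch of prompting a small generative transformer (GPT-2, 117 million parameters, the lower end of the scale range studied) with premises and checking whether its completion matches the expected conclusion. The premise and conclusion strings, the prompt format, and the string-matching check are illustrative assumptions, not the paper's actual benchmark or evaluation protocol.

```python
# Illustrative sketch (not the paper's protocol): probe a small causal LM
# for a single-step deductive chain by prompting with premises and checking
# whether the greedy completion contains the expected conclusion.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical two-premise chain of length one.
prompt = (
    "Premise 1: All birds can fly.\n"
    "Premise 2: A sparrow is a bird.\n"
    "Conclusion: A sparrow"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=False,  # greedy decoding for a deterministic check
    pad_token_id=tokenizer.eos_token_id,
)
# Keep only the newly generated tokens after the prompt.
completion = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])

# Naive correctness check on the expected conclusion.
print(completion)
print("correct" if "can fly" in completion.lower() else "incorrect")
```

In practice, longer deductive chains would add further premises to the prompt, and correctness would be judged over many such examples rather than by a single substring match.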
