Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models

05/27/2023
by Yuhui Zhang, et al.

Language models have been shown to exhibit positive scaling, where performance improves as models are scaled up in terms of size, compute, or data. In this work, we introduce NeQA, a dataset consisting of questions with negation on which language models do not exhibit straightforward positive scaling. We show that this task can exhibit inverse scaling, U-shaped scaling, or positive scaling, and that the three scaling trends shift in this order as we use more powerful prompting methods or model families. We hypothesize that solving NeQA depends on two subtasks: question answering (task 1) and negation understanding (task 2). We find that task 1 has linear scaling, while task 2 has sigmoid-shaped scaling with an emergent transition point, and composing these two scaling trends yields the final scaling trend of NeQA. Our work reveals the complex scaling trends of language models and provides a way to analyze them.
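As an illustration of this decomposition (a minimal sketch, not the paper's code or data), the Python snippet below composes a hypothetical linearly improving question-answering accuracy (task 1) with a hypothetical sigmoid-shaped probability of correctly handling negation (task 2). Shifting the sigmoid's assumed transition point relative to the range of model scales reproduces inverse, U-shaped, or positive overall scaling; all curve parameters here are made-up assumptions chosen only to make the three regimes visible.

```python
import numpy as np

# Hypothetical illustration: compose a linearly improving QA accuracy
# (task 1) with a sigmoid-shaped probability of understanding negation
# (task 2). The combined NeQA-style accuracy can then look inverse,
# U-shaped, or positive depending on where the sigmoid transition sits
# relative to the range of model scales considered.

log_scale = np.linspace(0, 10, 101)       # proxy for log model size / compute

acc_qa = 0.55 + 0.04 * log_scale          # task 1: linear scaling
acc_qa = np.clip(acc_qa, 0.0, 0.95)

def p_negation(x, midpoint):
    """Task 2: sigmoid-shaped scaling with an emergent transition point."""
    return 1.0 / (1.0 + np.exp(-2.0 * (x - midpoint)))

for midpoint, label in [(12.0, "inverse"), (7.0, "U-shaped"), (0.0, "positive")]:
    p_neg = p_negation(log_scale, midpoint)
    # If negation is understood, answer with QA accuracy; otherwise the
    # model effectively answers the un-negated question, flipping the label.
    neqa_acc = p_neg * acc_qa + (1.0 - p_neg) * (1.0 - acc_qa)
    print(label, neqa_acc[::25].round(2))
```

Under these assumptions, a transition point far beyond the scales tested gives inverse scaling, a transition point inside the tested range gives U-shaped scaling, and a transition point below it gives positive scaling.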

Related research

11/03/2022  Inverse scaling can become U-shaped
06/15/2023  Inverse Scaling: When Bigger Isn't Better
05/24/2023  Emergent inabilities? Inverse scaling over the course of pretraining
12/16/2022  'Rarely' a problem? Language models exhibit inverse scaling in their predictions following 'few'-type quantifiers
10/19/2022  Language Models Understand Us, Poorly
12/04/2022  Understanding How Model Size Affects Few-shot Instruction Prompting
05/24/2023  The Larger They Are, the Harder They Fail: Language Models do not Recognize Identifier Swaps in Python
