Inverse Scaling: When Bigger Isn't Better

06/15/2023
by Ian R. McKenzie et al.

Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). Here, we present evidence for the claim that LMs may show inverse scaling, or worse task performance with increased scale, e.g., due to flaws in the training objective and data. We present empirical evidence of inverse scaling on 11 datasets collected by running a public contest, the Inverse Scaling Prize, with a substantial prize pool. Through analysis of the datasets, along with other examples found in the literature, we identify four potential causes of inverse scaling: (i) preference to repeat memorized sequences over following in-context instructions, (ii) imitation of undesirable patterns in the training data, (iii) tasks containing an easy distractor task which LMs could focus on, rather than the harder real task, and (iv) correct but misleading few-shot demonstrations of the task. We release the winning datasets at https://inversescaling.com/data to allow for further investigation of inverse scaling. Our tasks have helped drive the discovery of U-shaped and inverted-U scaling trends, where an initial trend reverses, suggesting that scaling trends are less reliable at predicting the behavior of larger-scale models than previously understood. Overall, our results suggest that there are tasks for which increased model scale alone may not lead to progress, and that more careful thought needs to go into the data and objectives for training language models.
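
As a rough illustration of the trend taxonomy discussed in the abstract, the sketch below classifies a series of per-model accuracies (ordered by increasing scale) as positive scaling, inverse scaling, U-shaped, or inverted-U. The accuracies, the `classify_trend` helper, and the noise tolerance are hypothetical placeholders for this illustration; they are not results from the paper or the released datasets.

```python
# Minimal sketch (hypothetical numbers): labeling a scaling trend from
# accuracies measured on one task for models of increasing scale.

def classify_trend(accuracies, tol=0.01):
    """Classify a scaling trend from accuracies ordered by increasing model scale.

    Compares the first, middle, and last measurements; `tol` is the minimum
    change treated as a real difference rather than noise.
    """
    first, mid, last = accuracies[0], accuracies[len(accuracies) // 2], accuracies[-1]
    up_then_down = mid > first + tol and last < mid - tol   # rises, then falls
    down_then_up = mid < first - tol and last > mid + tol   # falls, then recovers
    if down_then_up:
        return "U-shaped"
    if up_then_down:
        return "inverted-U"
    if last > first + tol:
        return "positive scaling"
    if last < first - tol:
        return "inverse scaling"
    return "flat"


if __name__ == "__main__":
    # Hypothetical accuracy curves for models of increasing scale.
    runs = {
        "inverse": [0.72, 0.65, 0.58, 0.51],
        "u_shaped": [0.70, 0.55, 0.52, 0.66],
        "positive": [0.40, 0.48, 0.57, 0.66],
    }
    for name, accs in runs.items():
        print(f"{name}: {classify_trend(accs)}")
```

A three-point comparison like this is only a coarse heuristic; in practice, evaluating additional intermediate model scales is what reveals whether an apparent inverse-scaling trend later reverses into a U-shape.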

Related research

Inverse scaling can become U-shaped (11/03/2022)
Although scaling language models improves performance on a range of task...

Beyond Positive Scaling: How Negation Impacts Scaling Trends of Language Models (05/27/2023)
Language models have been shown to exhibit positive scaling, where perfo...

Emergent inabilities? Inverse scaling over the course of pretraining (05/24/2023)
Does inverse scaling only occur as a function of model parameter size, o...

'Rarely' a problem? Language models exhibit inverse scaling in their predictions following 'few'-type quantifiers (12/16/2022)
Language Models appear to perform poorly on quantification. We ask how b...

Understanding How Model Size Affects Few-shot Instruction Prompting (12/04/2022)
Large Language Models are affected by the phenomena of memorizing and fo...

Improved Bayes Risk Can Yield Reduced Social Welfare Under Competition (06/26/2023)
As the scale of machine learning models increases, trends such as scalin...

The Larger They Are, the Harder They Fail: Language Models do not Recognize Identifier Swaps in Python (05/24/2023)
Large Language Models (LLMs) have successfully been applied to code gene...