Modeling rapid language learning by distilling Bayesian priors into artificial neural networks

by R. Thomas McCoy et al.

Humans can learn languages from remarkably little experience. Developing computational models that explain this ability has been a major challenge in cognitive science. Bayesian models that build in strong inductive biases (factors that guide generalization) have been successful at explaining how humans might generalize from few examples in controlled settings, but are usually too restrictive to be tractably applied to more naturalistic data. By contrast, neural networks have flexible representations that allow them to learn well from naturalistic data but require many more examples than humans receive. We show that learning from limited naturalistic data is possible with an approach that combines the strong inductive biases of a Bayesian model with the flexible representations of a neural network. This approach works by distilling a Bayesian model's biases into a neural network. Like a Bayesian model, the resulting system can learn formal linguistic patterns from a small number of examples. Like a neural network, it can also learn aspects of English syntax from a corpus of natural language, and it outperforms a standard neural network at acquiring the linguistic phenomena of recursion and priming. Bridging the divide between Bayesian models and neural networks makes it possible to handle a broader range of learning scenarios than either approach can handle on its own.
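The core idea, distilling a Bayesian model's inductive biases into a network's parameters, can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the paper's implementation: a Gaussian prior over linear "rules" plays the role of the Bayesian prior over languages, a logistic model plays the role of the network, and a Reptile-style meta-learning loop plays the role of distillation, moving the initialization toward parameters that adapt quickly to any task the prior can generate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Hypothetical stand-in for the Bayesian prior: each task is a
    classification rule w drawn from a Gaussian whose mean encodes the
    shared structure the learner should come to expect."""
    w = rng.normal(loc=[3.0, 0.0, 0.0], scale=1.0)
    X = rng.normal(size=(20, 3))
    y = (X @ w > 0).astype(float)
    return X, y

def adapt(theta, X, y, lr=0.5, steps=10):
    """Few-shot adaptation: a handful of gradient steps of logistic
    regression starting from the shared initialization theta."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        theta = theta - lr * X.T @ (p - y) / len(y)
    return theta

# Reptile-style distillation: repeatedly sample a task from the prior,
# adapt to it, and nudge the initialization toward the adapted solution.
theta = np.zeros(3)
for _ in range(200):
    X, y = sample_task()
    theta += 0.1 * (adapt(theta, X, y) - theta)
```

After meta-training, `theta` is biased toward the prior's dominant direction (the first coordinate), so a few gradient steps on a new small dataset suffice where a blank initialization would need many more, which is the sense in which the prior has been "distilled" into the network.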



