Predicate learning in neural systems: Discovering latent generative structures

by Andrea E. Martin et al.

Humans learn complex latent structures from their environments (e.g., natural language, mathematics, music, social hierarchies). In cognitive science and cognitive neuroscience, models that infer higher-order structures from sensory or first-order representations have been proposed to account for the complexity and flexibility of human behavior. But how do the structures that these models invoke arise in neural systems in the first place? To answer this question, we explain how a system can learn latent representational structures (i.e., predicates) from experience with wholly unstructured data. During predicate learning, an artificial neural network exploits the naturally occurring dynamic properties of distributed computing across neuronal assemblies not only to learn predicates but also to combine them compositionally, two computational capacities that formal theories in multiple domains hold to be necessary for human behavior. We describe how predicates can be combined generatively using neural oscillations to achieve human-like extrapolation and compositionality in an artificial neural network. The ability to learn predicates from experience, to represent structures compositionally, and to extrapolate to unseen data offers an inroad to understanding and modeling the most complex human behaviors.
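The binding idea at the heart of the abstract, that distinct predicate-argument bindings can coexist if each occupies its own oscillatory phase, can be caricatured in a few lines of Python. This is a toy sketch under loose assumptions, not the paper's actual model: the `bind` and `read_out` helpers and the example role-filler pairs are all hypothetical illustrations.

```python
import math

def bind(pairs):
    """Toy binding-by-synchrony: give each role-filler pair its own
    oscillation phase, so symbols that are 'in phase' count as bound.
    (Illustrative only; not the model described in the paper.)"""
    phases = {}
    for k, (role, filler) in enumerate(pairs):
        phase = 2 * math.pi * k / len(pairs)  # distinct phase per binding
        phases[role] = phase
        phases[filler] = phase
    return phases

def read_out(phases):
    """Recover the bindings by grouping symbols whose phases match."""
    groups = {}
    for symbol, phase in phases.items():
        groups.setdefault(round(phase, 6), []).append(symbol)
    return [tuple(group) for group in groups.values()]

phases = bind([("larger", "dog"), ("smaller", "cat")])
print(read_out(phases))  # symbols sharing a phase come out grouped together
```

The point of the caricature is the desynchronization step: because the two bindings sit at different phases, "larger(dog)" and "smaller(cat)" can be represented over the same units at once without their role-filler assignments blending, which is the compositional property the abstract attributes to oscillatory dynamics.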




