On Neural Architecture Inductive Biases for Relational Tasks

by Giancarlo Kerg, et al.

Current deep learning approaches have shown good in-distribution generalization performance, but struggle with out-of-distribution generalization. This is especially true for tasks involving abstract relations, such as recognizing rules in sequences, as found in many intelligence tests. Recent work has explored how forcing relational representations to remain distinct from sensory representations, as seems to be the case in the brain, can help artificial systems. Building on this work, we further explore and formalize the advantages afforded by 'partitioned' representations of relations and sensory details, and how this inductive bias can help recompose learned relational structure in newly encountered settings. We introduce a simple architecture based on similarity scores which we name Compositional Relational Network (CoRelNet). Using this model, we investigate a series of inductive biases that ensure abstract relations are learned and represented distinctly from sensory data, and explore their effects on out-of-distribution generalization for a series of relational psychophysics tasks. We find that simple architectural choices can outperform existing models in out-of-distribution generalization. Together, these results show that partitioning relational representations from other information streams may be a simple way to augment existing network architectures' robustness when performing out-of-distribution relational computations.
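The core idea of an architecture "based on similarity scores" can be sketched in a few lines: encode each input object into an embedding, then hand the relational readout only the matrix of pairwise similarities between those embeddings, so sensory detail is discarded by construction. The sketch below is a minimal illustration of that bottleneck, not the paper's exact implementation; the row-wise softmax normalization and the function name are assumptions made here for concreteness.

```python
import numpy as np

def pairwise_relations(objects: np.ndarray) -> np.ndarray:
    """Map per-object embeddings to a normalized similarity matrix.

    objects: (n, d) array, one d-dimensional embedding per object
             (produced by any upstream sensory encoder).
    Returns: (n, n) matrix whose rows sum to 1. Downstream layers
             see only these relational scores, never the embeddings
             themselves -- the 'partitioned' inductive bias.
    """
    sims = objects @ objects.T  # (n, n) inner-product similarities
    # Row-wise softmax (one plausible normalization choice, assumed here).
    e = np.exp(sims - sims.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Example: three 4-dimensional object embeddings.
rng = np.random.default_rng(0)
R = pairwise_relations(rng.normal(size=(3, 4)))
```

A small MLP applied to the flattened matrix `R` would then produce the task output; because `R` depends on the objects only through their mutual similarities, the relational structure it encodes can transfer to novel sensory inputs.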




