Modelling Identity Rules with Neural Networks

12/06/2018
by Tillman Weyde, et al.

In this paper, we show that standard feed-forward and recurrent neural networks fail to learn abstract patterns based on identity rules. We propose Repetition Based Pattern (RBP) extensions to neural network structures that solve this problem and answer, as well as raise, questions about integrating structures for inductive bias into neural networks. Examples of abstract patterns are the sequence patterns ABA and ABB, where A and B can be any object. These were introduced by Marcus et al. (1999), who also found that 7-month-old infants recognise these patterns in sequences that use an unfamiliar vocabulary, while simple recurrent neural networks do not. This result has been contested in the literature, but it is confirmed by our experiments. We also show that the inability to generalise extends to different, previously untested, settings. We propose a new approach to modifying standard neural network architectures, called Repetition Based Patterns (RBP), with different variants for classification and prediction. Our experiments show that neural networks with the appropriate RBP structure achieve perfect classification and prediction performance on synthetic data, including mixed concrete and abstract patterns. RBP also improves neural network performance in experiments with real-world sequence prediction tasks. We discuss these findings in terms of challenges for neural network models and identify consequences of this result for developing inductive biases for neural network learning.
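To make the task concrete, the sketch below (not code from the paper; function names and the syllable vocabularies are illustrative) generates instances of the abstract patterns ABA and ABB and classifies them using only identity relations between positions. A rule of this form generalises trivially to an unfamiliar vocabulary, which is exactly what standard networks trained on token inputs fail to do:

```python
import random

def make_sequence(pattern, vocab):
    """Instantiate an abstract pattern such as 'ABA' or 'ABB'
    with concrete tokens drawn from vocab (A != B)."""
    a, b = random.sample(vocab, 2)
    return tuple(a if symbol == 'A' else b for symbol in pattern)

def classify_by_identity(seq):
    """Classify a 3-token sequence using identity rules only:
    ABA iff the first and last tokens are equal,
    ABB iff the last two tokens are equal."""
    if seq[0] == seq[2]:
        return 'ABA'
    if seq[1] == seq[2]:
        return 'ABB'
    return 'other'

# A familiarisation vocabulary vs. an unfamiliar test vocabulary,
# mirroring the Marcus et al. (1999) setup.
train_vocab = ['ga', 'ti', 'na', 'li']
test_vocab = ['wo', 'fe', 'de', 'ko']  # tokens never seen in training

seq = make_sequence('ABA', test_vocab)
print(seq, classify_by_identity(seq))  # classified correctly despite novel tokens
```

Because the rule refers to repetition (equality between positions) rather than to specific tokens, it transfers to `test_vocab` unchanged; the paper's RBP extensions aim to give networks access to exactly this kind of repetition information.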

