Context-Free Transductions with Neural Stacks

09/08/2018
by Yiding Hao, et al.

This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models. Due to the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, context-free language modelling, and cumulative XOR evaluation. Examining the behavior of our networks, we show that stack-augmented RNNs can discover intuitive stack-based strategies for solving our tasks. However, stack RNNs are more difficult to train than classical architectures such as LSTMs. Rather than employ stack-based strategies, more complex networks often find approximate solutions by using the stack as unstructured memory.
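As background for readers who have not seen a differentiable stack before, the sketch below shows one common way to make push and pop "soft" so the whole structure can be trained by gradient descent, which is the general idea behind stack-augmented RNNs. The function name, the fixed-depth matrix representation, and the exact mixing rule are illustrative assumptions for this note, not the specific architecture evaluated in the paper.

```python
import numpy as np

def soft_stack_step(stack, push, pop, value):
    """One differentiable stack update (illustrative, not the paper's rule).

    stack : (depth, dim) array; row 0 is the top of the stack.
    push, pop : scalars in [0, 1] with push + pop <= 1, softly gating the actions.
    value : (dim,) vector written to the top on a push.
    """
    depth, dim = stack.shape
    new_stack = np.zeros_like(stack)
    # Top cell: mixture of the pushed value, the element exposed by a pop,
    # and the unchanged old top (no-op).
    new_stack[0] = push * value + pop * stack[1] + (1 - push - pop) * stack[0]
    # Deeper cells: shift down on a push, shift up on a pop, else keep.
    for i in range(1, depth - 1):
        new_stack[i] = (push * stack[i - 1] + pop * stack[i + 1]
                        + (1 - push - pop) * stack[i])
    # Bottom cell has nothing below it to pull up from.
    new_stack[-1] = push * stack[-2] + (1 - push - pop) * stack[-1]
    return new_stack

# Tiny usage example: push two vectors, then pop once.
dim, depth = 4, 8
stack = np.zeros((depth, dim))
a, b = np.eye(dim)[0], np.eye(dim)[1]
stack = soft_stack_step(stack, push=1.0, pop=0.0, value=a)
stack = soft_stack_step(stack, push=1.0, pop=0.0, value=b)
stack = soft_stack_step(stack, push=0.0, pop=1.0, value=np.zeros(dim))
print(stack[0])  # ~a: the first pushed vector is back on top
```

In a stack RNN, the push and pop weights would be produced by the recurrent controller (for example, a softmax over push, pop, and no-op), so the network is free to learn a genuine stack discipline or, as the paper observes for more complex networks, to treat the stack as unstructured memory.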
