Slower is Better: Revisiting the Forgetting Mechanism in LSTM for Slower Information Decay

05/12/2021
by Hsiang-Yun Sherry Chien, et al.

Sequential information contains short- to long-range dependencies; however, learning long-timescale information has been a challenge for recurrent neural networks. Despite improvements in long short-term memory networks (LSTMs), the forgetting mechanism results in the exponential decay of information, limiting the network's capacity to capture long-timescale information. Here, we propose a power-law forget gate, which instead learns to forget information along a slower power-law decay function. Specifically, the new gate learns to control the power-law decay factor, p, allowing the network to adjust the information decay rate according to task demands. Our experiments show that an LSTM with power-law forget gates (pLSTM) can effectively capture long-range dependencies beyond hundreds of elements on image classification, language modeling, and categorization tasks, improving performance over the vanilla LSTM. We also inspected the revised forget gate by varying the initialization of p, setting p to a fixed value, and ablating cells in the pLSTM network. The results show that the information decay can be controlled by the learnable decay factor p, which allows pLSTM to achieve its superior performance. Altogether, we found that an LSTM with the proposed forget gate can learn long-term dependencies, outperforming other recurrent networks in multiple domains; such a gating mechanism can be integrated into other architectures to improve the learning of long-timescale information in recurrent neural networks.
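The core idea can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' reference implementation: the cell name PowerLawLSTMCell, the per-unit step counter k, and the input-gate-based reset heuristic are assumptions, chosen so that the cumulative forgetting over n steps equals (n + 1)^(-p) with a learnable exponent p, rather than the geometric decay of a standard forget gate.

```python
import torch
import torch.nn as nn


class PowerLawLSTMCell(nn.Module):
    """LSTM-style cell whose forget value follows a power-law decay schedule (sketch)."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        # Standard input, output, and candidate transforms.
        self.gates = nn.Linear(input_size + hidden_size, 3 * hidden_size)
        # One learnable decay exponent p per memory unit, kept positive via softplus.
        self.p_raw = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x, state):
        h, c, k = state  # k counts steps since each unit's memory was last refreshed
        i, o, g = self.gates(torch.cat([x, h], dim=-1)).chunk(3, dim=-1)
        i, o, g = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(g)

        p = nn.functional.softplus(self.p_raw)
        # Multiplying (k / (k + 1))**p over n consecutive steps telescopes to (n + 1)**(-p):
        # the cell content decays as a power law instead of exponentially.
        f = (k / (k + 1.0)).pow(p)

        c = f * c + i * g
        h = o * torch.tanh(c)
        # Illustrative reset heuristic: restart a unit's decay clock when it is
        # strongly rewritten, otherwise keep counting elapsed steps.
        k = torch.where(i > 0.5, torch.ones_like(k), k + 1.0)
        return h, (h, c, k)


# Example: initialize the state with k = 1 so the first step applies (1/2)**p.
batch, input_size, hidden_size = 4, 8, 16
cell = PowerLawLSTMCell(input_size, hidden_size)
h0 = torch.zeros(batch, hidden_size)
c0 = torch.zeros(batch, hidden_size)
k0 = torch.ones(batch, hidden_size)
y, _ = cell(torch.randn(batch, input_size), (h0, c0, k0))
```

Because the per-step forget values telescope into a single power of the elapsed time, old content shrinks polynomially rather than geometrically, which is the slower information decay described in the abstract.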

Related research

03/23/2018 - Can recurrent neural networks warp time?
04/25/2014 - Input anticipating critical reservoirs show power law forgetting of unexpected input events
10/22/2019 - Improving the Gating Mechanism of Recurrent Neural Networks
07/27/2018 - On the Inability of Markov Models to Capture Criticality in Human Mobility
08/31/2021 - Working Memory Connections for LSTM
03/08/2017 - Interpretable Structure-Evolving LSTM
02/28/2018 - Predicting Recall Probability to Adaptively Prioritize Study
