Surprisal-Driven Zoneout

10/24/2016
by Kamil Rocki, et al.

We propose a novel regularization method for recurrent neural networks called surprisal-driven zoneout. In this method, states zone out (maintain their previous value rather than updating) when the surprisal (the discrepancy between the last state's prediction and the target) is small. Regularization is thus adaptive and input-driven on a per-neuron basis. We demonstrate the effectiveness of this idea by achieving a state-of-the-art result of 1.31 bits per character on the Hutter Prize Wikipedia dataset, significantly reducing the gap to the best known highly engineered compression methods.
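
The mechanism can be sketched in a few lines. Below is a minimal NumPy sketch under the assumption of a hard threshold `tau` on the surprisal; the gate here is a scalar broadcast over all units, whereas the paper's formulation is per-neuron, and all names (`surprisal`, `zoneout_step`, `tau`) are illustrative, not the paper's code.

```python
# Minimal sketch of surprisal-driven zoneout for one RNN step.
# Assumption: a hard threshold tau decides update vs. zone out;
# the paper's exact (per-neuron) gating may differ.
import numpy as np

def surprisal(prev_probs, target_index):
    """Surprisal of the observed symbol under the previous step's
    prediction: -log p(x_t | x_{<t})."""
    return -np.log(prev_probs[target_index] + 1e-12)

def zoneout_step(h_prev, h_candidate, s, tau=1.0):
    """Zone out (keep the previous hidden state) when surprisal s is
    below tau; otherwise apply the candidate update. A per-neuron
    variant would use a vector-valued gate instead of this scalar."""
    gate = float(s >= tau)  # 1.0 -> update, 0.0 -> zone out
    return gate * h_candidate + (1.0 - gate) * h_prev

# Toy usage: a 4-symbol alphabet and a 3-unit hidden state.
rng = np.random.default_rng(0)
prev_probs = np.array([0.7, 0.1, 0.1, 0.1])    # last step's prediction
h_prev = rng.standard_normal(3)
h_candidate = np.tanh(rng.standard_normal(3))  # stand-in for the RNN update

s = surprisal(prev_probs, target_index=0)      # well predicted -> low surprisal
h_next = zoneout_step(h_prev, h_candidate, s)
print(s, np.allclose(h_next, h_prev))          # zoned out: state unchanged
```

In this sketch, a well-predicted input (low surprisal) freezes the state, so the network only spends updates on inputs its previous prediction failed to anticipate.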

Related research

05/24/2017 · Fast-Slow Recurrent Neural Networks
Processing sequential data of variable length is a major challenge in a ...

08/22/2016 · Surprisal-Driven Feedback in Recurrent Networks
Recurrent neural nets are widely used for predicting temporal data. Thei...

11/18/2019 · RotationOut as a Regularization Method for Neural Network
In this paper, we propose a novel regularization method, RotationOut, fo...

11/04/2021 · Recurrent Neural Network Training with Convex Loss and Regularization Functions by Extended Kalman Filtering
We investigate the use of extended Kalman filtering to train recurrent n...

03/29/2017 · Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
We propose a method for lossy image compression based on recurrent, conv...

11/19/2015 · Variable Rate Image Compression with Recurrent Neural Networks
A large fraction of Internet traffic is now driven by requests from mobi...

05/09/2016 · Efficiency Evaluation of Character-level RNN Training Schedules
We present four training and prediction schedules from the same characte...
