Bayesian Sparsification of Recurrent Neural Networks

07/31/2017
by   Ekaterina Lobacheva, et al.

Recurrent neural networks achieve state-of-the-art results in many text analysis tasks but often require a lot of memory to store their weights. The recently proposed Sparse Variational Dropout eliminates the majority of the weights in a feed-forward neural network without a significant loss of quality. We apply this technique to sparsify recurrent neural networks. To account for recurrent specifics, we also rely on Binary Variational Dropout for RNNs. We report a 99.5% sparsity level on a sentiment analysis task without a quality drop, and up to an 87% sparsity level on a language modeling task with only a slight loss of accuracy.
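The core idea behind Sparse Variational Dropout is to learn a per-weight multiplicative Gaussian noise rate alpha and prune weights whose noise dominates their signal. The sketch below illustrates the pruning criterion on a toy weight matrix; the variable names (`theta`, `log_sigma2`) and the threshold value are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned variational parameters for a small recurrent
# weight matrix: theta is the weight mean, log_sigma2 the log-variance
# of the multiplicative Gaussian noise (both would normally be trained
# by maximizing the variational lower bound).
theta = rng.normal(size=(4, 4))
log_sigma2 = rng.normal(loc=-3.0, size=(4, 4))

# The dropout rate is alpha = sigma^2 / theta^2. Weights with large
# log alpha are dominated by noise; a common pruning rule zeroes out
# weights with log alpha above a threshold (3 is an illustrative value,
# corresponding to alpha > ~20).
log_alpha = log_sigma2 - 2.0 * np.log(np.abs(theta) + 1e-8)
mask = log_alpha < 3.0           # keep only weights with low noise
sparse_weights = theta * mask    # pruned weights become exactly zero

sparsity = 1.0 - mask.mean()     # fraction of zeroed-out weights
print(f"sparsity: {sparsity:.2%}")
```

At test time the pruned matrix can be stored in a sparse format, which is where the memory savings reported in the abstract come from.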
