Recurrent Neural Network-Based Semantic Variational Autoencoder for Sequence-to-Sequence Learning

02/09/2018
by Myeongjun Jang, et al.

Sequence-to-sequence (Seq2seq) models have played an important role in the recent success of various natural language processing methods, such as machine translation, text summarization, and speech recognition. However, current Seq2seq models have trouble preserving global latent information from a long sequence of words. The variational autoencoder (VAE) alleviates this problem by learning a continuous semantic space for the input sentence, but it does not solve the problem completely. In this paper, we propose a new recurrent neural network (RNN)-based Seq2seq model, the RNN semantic variational autoencoder (RNN-SVAE), to better capture the global latent information of a sequence of words. To weight the words in a sentence equally, regardless of their position within the sentence, we construct a document information vector using the attention information between the final state of the encoder and every prior hidden state. We then combine this document information vector with the final hidden state of the bi-directional RNN encoder to form the global latent vector, which becomes the output of the encoder part. Finally, the mean and standard deviation of the continuous semantic space are learned to take advantage of the variational method. Experimental results on three natural language tasks (i.e., language modeling, missing word imputation, and paraphrase identification) confirm that the proposed RNN-SVAE yields higher performance than two benchmark models.
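The sketch below illustrates the encoder idea described in the abstract: attention scores between the final encoder state and every hidden state produce a document information vector, which is concatenated with the final hidden state and projected to the mean and (log-)variance of the latent space. It is a minimal PyTorch illustration, not the authors' implementation; the class name RNNSVAEEncoder, the use of a GRU, the layer sizes, and the dot-product attention form are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RNNSVAEEncoder(nn.Module):
    """Illustrative encoder sketch (hypothetical names and sizes):
    bi-directional GRU + attention-based document information vector
    + variational projection to mean / log-variance."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, latent_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        enc_dim = 2 * hidden_dim                      # forward + backward directions
        self.to_mean = nn.Linear(2 * enc_dim, latent_dim)
        self.to_logvar = nn.Linear(2 * enc_dim, latent_dim)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))           # (batch, seq_len, 2*hidden)
        final = h[:, -1, :]                           # final encoder state (simplification)

        # Attention between the final state and every hidden state; the
        # weighted sum serves as the document information vector.
        scores = torch.bmm(h, final.unsqueeze(2)).squeeze(2)   # (batch, seq_len)
        weights = F.softmax(scores, dim=1)
        doc_vec = torch.bmm(weights.unsqueeze(1), h).squeeze(1)

        # Global latent vector: final hidden state + document information vector.
        global_vec = torch.cat([final, doc_vec], dim=1)
        mean, logvar = self.to_mean(global_vec), self.to_logvar(global_vec)

        # Reparameterization trick to sample from the continuous semantic space.
        z = mean + torch.exp(0.5 * logvar) * torch.randn_like(mean)
        return z, mean, logvar
```

In use, `z` would feed an RNN decoder trained with the usual VAE objective (reconstruction loss plus a KL term on `mean` and `logvar`); those training details are omitted here.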

