Exploring Architectures, Data and Units For Streaming End-to-End Speech Recognition with RNN-Transducer

01/02/2018
by Kanishka Rao, et al.

We investigate training end-to-end speech recognition models with the recurrent neural network transducer (RNN-T): a streaming, all-neural, sequence-to-sequence architecture which jointly learns acoustic and language model components from transcribed acoustic data. We explore various model architectures and demonstrate how the model can be improved further if additional text or pronunciation data are available. The model consists of an 'encoder', which is initialized from a connectionist temporal classification-based (CTC) acoustic model, and a 'decoder', which is partially initialized from a recurrent neural network language model trained on text data alone. The entire neural network is trained with the RNN-T loss and directly outputs the recognized transcript as a sequence of graphemes, thus performing end-to-end speech recognition. We find that performance can be improved further through the use of sub-word units ('wordpieces'), which capture longer context and significantly reduce substitution errors. The best RNN-T system, a twelve-layer LSTM encoder with a two-layer LSTM decoder trained with 30,000 wordpieces as output targets, achieves a word error rate of 8.5% on voice-search and 5.2% on voice-dictation tasks, and is comparable to a state-of-the-art baseline at 8.3% on voice-search and 5.4% on voice-dictation.
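The abstract describes a three-part architecture: an acoustic encoder, a label-history 'decoder' (prediction network), and a joint network trained end-to-end with the RNN-T loss. The sketch below is a minimal illustration of that structure in PyTorch, not the authors' implementation; the layer counts, hidden size, feature dimension, and vocabulary size are placeholder assumptions (the paper's best system uses a twelve-layer encoder, a two-layer decoder, and 30,000 wordpieces).

import torch
import torch.nn as nn

class RNNTransducer(nn.Module):
    # Encoder + prediction network + joint network, per the description above.
    def __init__(self, num_features=80, vocab_size=1000, hidden=640):
        super().__init__()
        # Acoustic encoder over input frames; in the paper this is a deep LSTM
        # stack initialized from a CTC acoustic model.
        self.encoder = nn.LSTM(num_features, hidden, num_layers=4, batch_first=True)
        # Prediction network ("decoder") over previously emitted labels; in the
        # paper it is partially initialized from an RNN language model.
        self.embed = nn.Embedding(vocab_size + 1, hidden)  # +1 for the blank label
        self.predictor = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        # Joint network combining encoder and prediction outputs for every
        # (frame t, label position u) pair.
        self.joint = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.Tanh(),
            nn.Linear(hidden, vocab_size + 1),
        )

    def forward(self, feats, label_history):
        # feats: (B, T, num_features); label_history: (B, U) label ids.
        enc, _ = self.encoder(feats)                          # (B, T, H)
        pred, _ = self.predictor(self.embed(label_history))   # (B, U, H)
        # Broadcast both to a (B, T, U, 2H) grid and project to output logits.
        t = enc.unsqueeze(2).expand(-1, -1, pred.size(1), -1)
        u = pred.unsqueeze(1).expand(-1, enc.size(1), -1, -1)
        return self.joint(torch.cat([t, u], dim=-1))          # (B, T, U, vocab+1)

During training, the resulting logit grid would be passed, together with frame and label lengths, to an RNN-T loss implementation (for example, torchaudio.functional.rnnt_loss); decoding runs a streaming beam search over the same encoder and prediction-network outputs.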


Related research

12/05/2017  State-of-the-art Speech Recognition With Sequence-to-Sequence Models
10/31/2016  Neural Speech Recognizer: Acoustic-to-Word LSTM Model for Large Vocabulary Speech Recognition
10/26/2017  Streaming Small-Footprint Keyword Spotting using Sequence-to-Sequence Models
01/25/2022  Improving the fusion of acoustic and text representations in RNN-T
08/05/2015  Listen, Attend and Spell
08/07/2015  An End-to-End Neural Network for Polyphonic Piano Music Transcription
10/25/2016  Sequence Segmentation Using Joint RNN and Structured Prediction Models
