Transformer Transducer: A Streamable Speech Recognition Model with Transformer Encoders and RNN-T Loss

02/07/2020
by Qian Zhang, et al.

In this paper we present an end-to-end speech recognition model with Transformer encoders that can be used in a streaming speech recognition system. Transformer computation blocks based on self-attention are used to encode both audio and label sequences independently. The activations from both audio and label encoders are combined with a feed-forward layer to compute a probability distribution over the label space for every combination of acoustic frame position and label history. This is similar to the Recurrent Neural Network Transducer (RNN-T) model, which uses RNNs for information encoding instead of Transformer encoders. The model is trained with a monotonic RNN-T loss well-suited to frame-synchronous, streaming decoding. We present results on the LibriSpeech dataset showing that limiting the left context for self-attention in the Transformer layers makes decoding computationally tractable for streaming, with only a slight degradation in accuracy. We also show that the full attention version of our model achieves competitive performance compared to existing LibriSpeech benchmarks for attention-based models trained with cross-entropy loss. Our results also show that we can bridge the gap between full attention and limited attention versions of our model by attending to a limited number of future frames.
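The abstract describes two ideas that a short sketch can make concrete: the feed-forward "joint" combination of audio- and label-encoder activations into a distribution over labels for every (frame, label-history) pair, and the self-attention mask that limits left and right context for streamable decoding. The PyTorch code below is a minimal illustration under stated assumptions, not the authors' implementation; the module names, the tanh combination, and all dimensions and context sizes are made up for the example.

```python
import torch
import torch.nn as nn


class JointNetwork(nn.Module):
    """Combine audio- and label-encoder activations into per-(frame, label-history) logits."""

    def __init__(self, audio_dim, label_dim, joint_dim, vocab_size):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, joint_dim)
        self.label_proj = nn.Linear(label_dim, joint_dim)
        self.out = nn.Linear(joint_dim, vocab_size)  # vocab includes the blank label

    def forward(self, audio_enc, label_enc):
        # audio_enc: (B, T, audio_dim); label_enc: (B, U, label_dim)
        a = self.audio_proj(audio_enc).unsqueeze(2)  # (B, T, 1, joint_dim)
        l = self.label_proj(label_enc).unsqueeze(1)  # (B, 1, U, joint_dim)
        # Feed-forward combination for every (t, u) pair, then project to label logits.
        return self.out(torch.tanh(a + l))           # (B, T, U, vocab_size)


def limited_context_mask(num_frames, left_context, right_context):
    """Boolean mask: frame i may attend to frames in [i - left_context, i + right_context]."""
    idx = torch.arange(num_frames)
    offset = idx.unsqueeze(1) - idx.unsqueeze(0)  # query position minus key position
    return (offset <= left_context) & (offset >= -right_context)


# Toy usage with illustrative sizes: 2 utterances, 50 acoustic frames, a 12-step label
# history, and a streaming mask with 10 left-context frames and 2 look-ahead frames.
joint = JointNetwork(audio_dim=512, label_dim=512, joint_dim=640, vocab_size=1000)
logits = joint(torch.randn(2, 50, 512), torch.randn(2, 12, 512))  # (2, 50, 12, 1000)
mask = limited_context_mask(num_frames=50, left_context=10, right_context=2)
```

The logits tensor indexed as (batch, frame, label history, label) is what an RNN-T loss consumes; setting right_context to 0 gives a purely causal, streaming encoder, while a small positive value trades latency for accuracy as the abstract notes.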
