Streaming Multi-talker Speech Recognition with Joint Speaker Identification

04/05/2021
by Liang Lu, et al.

In multi-talker scenarios such as meetings and conversations, speech processing systems are usually required both to transcribe the audio and to identify the speakers for downstream applications. Since overlapped speech is common in these scenarios, conventional approaches usually address the problem in a cascaded fashion, with speech separation, speech recognition, and speaker identification modules trained independently. In this paper, we propose the Streaming Unmixing, Recognition and Identification Transducer (SURIT) – a new framework that deals with this problem in an end-to-end streaming fashion. SURIT employs the recurrent neural network transducer (RNN-T) as the backbone for both speech recognition and speaker identification. We validate our idea on LibrispeechMix – a multi-talker dataset derived from Librispeech – and present encouraging results.
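To make the unmixing-plus-shared-backbone idea concrete, here is a minimal NumPy sketch. It is an illustrative assumption, not the paper's implementation: `unmix` stands in for a learned mask-based separation module (masks are random here), and `encode` stands in for the shared RNN-T encoder (a single linear layer with tanh). Each unmixed branch would then feed its own recognition and speaker-identification heads.

```python
import numpy as np

rng = np.random.default_rng(0)

def unmix(mixture, num_speakers=2):
    # Hypothetical mask-based unmixing: one mask per speaker, normalized
    # across speakers with a softmax, applied to the mixed features.
    # A real system would predict these masks with a neural network.
    num_frames, feat_dim = mixture.shape
    logits = rng.normal(size=(num_speakers, num_frames, feat_dim))
    masks = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    return [mask * mixture for mask in masks]

def encode(features, weights):
    # Stand-in for the shared encoder backbone: linear layer + tanh.
    return np.tanh(features @ weights)

num_frames, feat_dim, hidden_dim = 50, 40, 16
mixture = rng.normal(size=(num_frames, feat_dim))
weights = rng.normal(size=(feat_dim, hidden_dim)) * 0.1

# One encoded stream per speaker; both share the same encoder weights,
# and each would drive separate ASR and speaker-ID output heads.
branches = [encode(stream, weights) for stream in unmix(mixture)]
print(len(branches), branches[0].shape)  # → 2 (50, 16)
```

The key design point this illustrates is weight sharing: the same backbone processes every unmixed stream, so the model scales to overlapped speech without duplicating the recognizer per speaker.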
