Data and knowledge-driven approaches for multilingual training to improve the performance of speech recognition systems of Indian languages

01/24/2022
by A. Madhavaraj, et al.

We propose data-driven and knowledge-driven approaches for multilingual training of an automatic speech recognition (ASR) system for a target language by pooling speech data from multiple source languages. Exploiting the acoustic similarities between Indian languages, we implement two approaches. In phone/senone mapping, a deep neural network (DNN) learns to map the senones or phones of one language to those of the others, and the transcriptions of the source languages are modified so that they can be used along with the target-language data to train and fine-tune the target-language ASR system. In the other approach, we model the acoustic information for all the languages simultaneously by training a multitask DNN (MTDNN) that predicts the senones of each language in a different output layer. The cross-entropy loss and the weight-update procedure are modified such that, when a feature vector belongs to a particular language, only the shared layers and the output layer responsible for predicting that language's senone classes are updated. In the low-resource setting (LRS), 40 hours of transcribed data each for Tamil, Telugu and Gujarati are used for training. The DNN-based senone mapping technique gives a relative improvement in word error rate (WER) of 9.66% over the baseline system for Tamil, with corresponding gains for Gujarati and Telugu. In the medium-resource setting (MRS), 160, 275 and 135 hours of data are used for Tamil, Kannada and Hindi, respectively, and the same technique gives a larger relative improvement of 13.94% for Tamil, with corresponding gains for Kannada and Hindi. In the LRS, the MTDNN trained with senone mapping gives a higher relative WER improvement of 15.0% for Tamil, along with gains for Gujarati and Telugu, whereas in the MRS we observe relative improvements of 21.24% and 21.05%.
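As a rough illustration of the senone-mapping idea described above, the following PyTorch sketch (not the authors' implementation; the class names, layer sizes and senone counts are assumptions) relabels source-language frame alignments into target-language senones with a small mapping DNN, so that pooled source data can carry target-language labels.

```python
# Hypothetical sketch (not the authors' code): a DNN that maps source-language
# senones to target-language senones, used to relabel source-language frame
# alignments so the pooled data can train or fine-tune the target-language model.

import torch
import torch.nn as nn

class SenoneMapper(nn.Module):
    def __init__(self, n_src_senones, n_tgt_senones, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_src_senones, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_tgt_senones),
        )

    def forward(self, src_posteriors):
        # src_posteriors: (batch, n_src_senones) senone posteriors or one-hot labels
        return self.net(src_posteriors)

def relabel_alignment(mapper, src_alignment, n_src_senones):
    """Map a frame-level source-senone alignment to target-language senone labels."""
    one_hot = nn.functional.one_hot(src_alignment, n_src_senones).float()
    with torch.no_grad():
        tgt_logits = mapper(one_hot)
    return tgt_logits.argmax(dim=-1)   # predicted target senone per frame

# Example: relabel a 100-frame source-language alignment into target senones.
# (In practice the mapper would first be trained, e.g. from target-model senone
# posteriors computed on source speech; that training step is omitted here.)
mapper = SenoneMapper(n_src_senones=2500, n_tgt_senones=3000)
src_alignment = torch.randint(0, 2500, (100,))
tgt_labels = relabel_alignment(mapper, src_alignment, n_src_senones=2500)
```

The multitask DNN with language-specific output layers and the restricted weight update can be sketched in a similar way, again as an assumed illustration rather than the authors' code: each mini-batch comes from a single language, the forward pass uses only that language's output head, and the cross-entropy gradients therefore update only the shared layers and that head.

```python
# Hypothetical sketch: shared hidden layers with one senone-classification head
# per language. Because only the matching head participates in the forward pass,
# backpropagation updates only the shared layers and that language's output layer.

import torch
import torch.nn as nn

class MultitaskSenoneDNN(nn.Module):
    def __init__(self, feat_dim, hidden_dim, senones_per_lang):
        # senones_per_lang: dict mapping language name -> number of senone classes
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({
            lang: nn.Linear(hidden_dim, n_senones)
            for lang, n_senones in senones_per_lang.items()
        })

    def forward(self, feats, lang):
        # Route the shared representation through the head of `lang` only.
        return self.heads[lang](self.shared(feats))

def train_step(model, optimizer, feats, senone_targets, lang):
    """One update on a mini-batch drawn from a single language."""
    optimizer.zero_grad()
    logits = model(feats, lang)
    loss = nn.functional.cross_entropy(logits, senone_targets)
    loss.backward()          # gradients reach only the shared layers and this head
    optimizer.step()
    return loss.item()

# Example usage with made-up feature and senone dimensions.
model = MultitaskSenoneDNN(
    feat_dim=40, hidden_dim=512,
    senones_per_lang={"tamil": 3000, "telugu": 3000, "gujarati": 3000},
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
feats = torch.randn(32, 40)               # batch of acoustic feature vectors
targets = torch.randint(0, 3000, (32,))   # senone labels for this batch
train_step(model, optimizer, feats, targets, lang="tamil")
```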

Related research

07/06/2020 · Massively Multilingual ASR: 50 Languages, 1 Model, 1 Billion Parameters
We study training a single acoustic model for multiple languages with th...

05/28/2022 · Adaptive Activation Network For Low Resource Multilingual Speech Recognition
Low resource automatic speech recognition (ASR) is a useful but thorny t...

07/07/2022 · Investigating the Impact of Cross-lingual Acoustic-Phonetic Similarities on Multilingual Speech Recognition
Multilingual automatic speech recognition (ASR) systems mostly benefit l...

08/13/2020 · LSTM Acoustic Models Learn to Align and Pronounce with Graphemes
Automated speech recognition coverage of the world's languages continues...

06/17/2019 · Adversarial Training for Multilingual Acoustic Modeling
Multilingual training has been shown to improve acoustic modeling perfor...

11/13/2017 · Multilingual Adaptation of RNN Based ASR Systems
A large amount of data is required for automatic speech recognition (ASR...

06/19/2018 · A Survey of Recent DNN Architectures on the TIMIT Phone Recognition Task
In this survey paper, we have evaluated several recent deep neural netwo...
