Automatic Severity Assessment of Dysarthric Speech Using a Self-Supervised Model with Multi-Task Learning

10/27/2022
by Eun Jung Yeo et al.

Automatic assessment of dysarthric speech is essential for sustained treatment and rehabilitation. However, obtaining atypical speech is challenging, often leading to data scarcity. To tackle this problem, we propose a novel automatic severity assessment method for dysarthric speech that uses a self-supervised model in conjunction with multi-task learning. Wav2vec 2.0 XLS-R is jointly trained on two tasks: severity level classification and an auxiliary automatic speech recognition (ASR) task. For the baseline experiments, we employ hand-crafted features such as eGeMAPS and linguistic features, together with SVM, MLP, and XGBoost classifiers. Evaluated on the Korean dysarthric speech QoLT database, our model outperforms the traditional baseline methods, with a relative percentage increase of 4.79% in classification accuracy. In addition, the proposed model surpasses the model trained without the ASR head, achieving a 10.09% relative percentage increase. Furthermore, we show how multi-task learning affects severity classification performance by analyzing the latent representations and the regularization effect.
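As a rough illustration of the setup the abstract describes, the sketch below wires a shared wav2vec 2.0 XLS-R encoder to two heads: a severity classifier over mean-pooled frame features and an auxiliary CTC-trained ASR head. This is not the authors' released code; the checkpoint name, head sizes, number of severity levels, vocabulary size, pooling choice, and loss weight `alpha` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model


class MultiTaskSeverityModel(nn.Module):
    """Shared XLS-R encoder with a severity head and an auxiliary ASR (CTC) head."""

    def __init__(self, num_severity_levels=5, vocab_size=70, alpha=0.5):
        super().__init__()
        # Shared self-supervised encoder (300M-parameter XLS-R checkpoint assumed).
        self.encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-xls-r-300m")
        hidden = self.encoder.config.hidden_size
        self.severity_head = nn.Linear(hidden, num_severity_levels)
        self.asr_head = nn.Linear(hidden, vocab_size)  # vocab size is a placeholder
        self.ce_loss = nn.CrossEntropyLoss()
        self.ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
        self.alpha = alpha  # weight on the auxiliary ASR loss (assumed value)

    def forward(self, input_values, severity_labels, targets, target_lengths):
        frames = self.encoder(input_values).last_hidden_state   # (B, T, H)

        # Severity branch: mean-pool frame features, classify the utterance.
        sev_logits = self.severity_head(frames.mean(dim=1))     # (B, C)
        loss_sev = self.ce_loss(sev_logits, severity_labels)

        # Auxiliary ASR branch: frame-level log-probs trained with CTC
        # (assumes an unpadded batch, so every utterance spans all T frames).
        log_probs = self.asr_head(frames).log_softmax(dim=-1)   # (B, T, V)
        input_lengths = torch.full(
            (frames.size(0),), frames.size(1), dtype=torch.long
        )
        loss_asr = self.ctc_loss(
            log_probs.transpose(0, 1),  # CTCLoss expects (T, B, V)
            targets, input_lengths, target_lengths,
        )

        # Joint objective: classification loss plus weighted ASR loss.
        return loss_sev + self.alpha * loss_asr, sev_logits
```

The joint loss lets gradients from the ASR objective flow into the shared encoder, which is the regularization mechanism the abstract credits for the gain over the classification-only model.

A similarly hedged sketch of the hand-crafted baseline: eGeMAPS functionals extracted with openSMILE and fed to an SVM (an MLP or XGBoost classifier would slot in the same way). The file paths and labels are hypothetical placeholders.

```python
import opensmile
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

train_wavs = ["speaker01_utt01.wav", "speaker02_utt01.wav"]  # placeholder paths
train_labels = [0, 2]                                        # placeholder severity levels

# Extract the 88 utterance-level eGeMAPS functionals per file.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
X_train = smile.process_files(train_wavs).to_numpy()

# Standardize features, then fit an RBF-kernel SVM severity classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, train_labels)
```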
