A Data-Driven Investigation of Noise-Adaptive Utterance Generation with Linguistic Modification

10/19/2022
by Anupama Chingacham, et al.

In noisy environments, speech can be hard to understand for humans. Spoken dialog systems can help to enhance the intelligibility of their output, either by modifying the speech synthesis (e.g., imitating Lombard speech) or by optimizing the language generation. We focus here on the second type of approach, in which an intended message is realized with words that are more intelligible in a specific noisy environment. By conducting a speech perception experiment, we created a dataset of 900 paraphrases in babble noise, perceived by native English speakers with normal hearing. We find that careful selection of paraphrases can improve intelligibility by 33%. The data shows that the intelligibility differences between paraphrases are mainly driven by noise-robust acoustic cues. Furthermore, we propose an intelligibility-aware paraphrase ranking model, which outperforms baseline models with a relative improvement of 31.37%.
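To make the ranking idea concrete, the sketch below shows one possible shape of a noise-adaptive paraphrase ranker: given an intended message, candidate paraphrases are scored by a predicted intelligibility for a target noise condition and sorted. This is only an illustrative assumption, not the authors' model; the feature set (word length, vowel ratio), the weights, and the function names are placeholders standing in for a model trained on the perception data described in the abstract.

```python
# Hypothetical sketch: rank candidate paraphrases of an intended message by a
# predicted intelligibility score for a given noise condition (e.g., babble at
# a fixed SNR). The features and weights are illustrative placeholders only.

VOWELS = set("aeiou")


def predicted_intelligibility(paraphrase: str, snr_db: float) -> float:
    """Toy proxy score: favors shorter, vowel-rich words, which tend to carry
    more noise-robust acoustic cues, and degrades as the SNR drops."""
    words = paraphrase.lower().split()
    total_chars = sum(len(w) for w in words)
    avg_word_len = total_chars / len(words)
    vowel_ratio = sum(c in VOWELS for w in words for c in w) / total_chars
    # Placeholder linear combination; a real ranker would be trained on
    # human perception results for paraphrase pairs in noise.
    return vowel_ratio - 0.05 * avg_word_len + 0.01 * snr_db


def rank_paraphrases(candidates: list[str], snr_db: float) -> list[tuple[str, float]]:
    """Return candidates sorted from most to least intelligible (predicted)."""
    scored = [(p, predicted_intelligibility(p, snr_db)) for p in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    candidates = [
        "The meeting starts at noon.",
        "The meeting commences at twelve o'clock.",
    ]
    for paraphrase, score in rank_paraphrases(candidates, snr_db=-5.0):
        print(f"{score:.3f}  {paraphrase}")
```

In a full system, the scoring function would be replaced by a learned model of intelligibility in the target noise, and the top-ranked paraphrase would be passed on to speech synthesis.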

