Normalized Features for Improving the Generalization of DNN Based Speech Enhancement

09/07/2017
by Robert Rehr, et al.

Enhancing noisy speech is an important task to restore its quality and to improve its intelligibility. In traditional approaches that do not rely on machine learning (ML), the parameters required for noise reduction are estimated blindly from the noisy observation, while the actual filter functions are derived analytically based on statistical assumptions. Even though such approaches generalize well to many different acoustic conditions, their ability to suppress transient noises is limited. To remedy this shortcoming, ML methods such as deep learning have been employed for speech enhancement. However, due to their data-driven nature, the generalization of ML based approaches to unknown noise types is still debated. To improve the generalization of ML based algorithms and to enhance the noise suppression of non-ML based methods, we propose a combination of both approaches. For this, we employ estimates of the a priori signal-to-noise ratio (SNR) and the a posteriori SNR as input features in a deep neural network (DNN) based enhancement scheme. We show that this approach allows ML based speech estimators to generalize quickly to unknown noise types even if only a few noise conditions have been seen during training. Further, the proposed features outperform a competing approach where an estimate of the noise power spectral density is appended to the noisy spectra. Instrumental measures such as Perceptual Evaluation of Speech Quality (PESQ) and short-time objective intelligibility (STOI) indicate strong improvements in unseen conditions when the proposed features are used. Listening experiments confirm the improved generalization of our proposed combination.
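To make the proposed input features concrete, the sketch below shows one way the a posteriori SNR and an a priori SNR estimate could be computed per time-frequency bin from a noisy STFT and a noise PSD estimate. The a priori SNR is obtained here with the standard decision-directed approach; the function name `snr_features`, the smoothing factor `alpha`, the SNR floor `xi_min`, and the log-domain stacking are illustrative assumptions and not necessarily the authors' exact implementation.

```python
import numpy as np

def snr_features(noisy_stft, noise_psd, alpha=0.98, xi_min=10 ** (-25 / 10)):
    """Compute a posteriori SNR and a decision-directed a priori SNR
    estimate per time-frequency bin as DNN input features.

    noisy_stft : complex STFT of the noisy signal, shape (frames, bins)
    noise_psd  : noise PSD estimate, same shape, magnitude-squared domain
    """
    noisy_power = np.abs(noisy_stft) ** 2
    noise_psd = np.maximum(noise_psd, 1e-12)
    gamma = noisy_power / noise_psd                 # a posteriori SNR
    xi = np.empty_like(gamma)                       # a priori SNR
    prev_clean_power = np.zeros(noisy_stft.shape[1])

    for t in range(noisy_stft.shape[0]):
        # Decision-directed estimate: blend the previous frame's clean-speech
        # power with the instantaneous estimate max(gamma - 1, 0).
        xi[t] = alpha * prev_clean_power / noise_psd[t] \
            + (1.0 - alpha) * np.maximum(gamma[t] - 1.0, 0.0)
        xi[t] = np.maximum(xi[t], xi_min)
        # Wiener-type gain, used here only to update the clean-speech power.
        gain = xi[t] / (1.0 + xi[t])
        prev_clean_power = (gain ** 2) * noisy_power[t]

    # Stack both SNRs in the log domain as input features for the DNN.
    return np.stack([10 * np.log10(xi), 10 * np.log10(gamma)], axis=-1)
```

Using SNR-based features of this kind, rather than raw noisy spectra, normalizes the network input with respect to the noise level, which is the property the paper credits for the improved generalization to unseen noise types.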
