NeuralDPS: Neural Deterministic Plus Stochastic Model with Multiband Excitation for Noise-Controllable Waveform Generation

03/05/2022
by   PetsTime, et al.

Traditional vocoders have the advantages of high synthesis efficiency, strong interpretability, and speech editability, while neural vocoders have the advantage of high synthesis quality. To combine the advantages of both, and inspired by the traditional deterministic plus stochastic model, this paper proposes a novel neural vocoder named NeuralDPS that retains high speech quality while achieving high synthesis efficiency and noise controllability. Firstly, the framework contains four modules: a deterministic source module, a stochastic source module, a neural V/UV decision module, and a neural filter module. The only input the vocoder requires is the spectral parameters, which avoids errors caused by estimating additional parameters such as F0. Secondly, because different frequency bands may contain different proportions of deterministic and stochastic components, a multiband excitation strategy is used to generate a more accurate excitation signal and to reduce the burden on the neural filter. Thirdly, a method for controlling the noise components of speech is proposed, so that the signal-to-noise ratio (SNR) of the speech can be adjusted easily. Objective and subjective experimental results show that the proposed NeuralDPS vocoder obtains performance similar to WaveNet while generating waveforms at least 280 times faster than the WaveNet vocoder; it also runs about 28 times faster than real time on a single CPU core. Experiments further verify that the method can effectively control the noise components in the predicted speech and adjust its SNR. Examples of generated speech can be found at https://hairuo55.github.io/NeuralDPS.
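To make the multiband excitation idea concrete, here is a minimal, self-contained sketch rather than the paper's implementation: it mixes a periodic (deterministic) source and a noise (stochastic) source with a different weight in each frequency band and sums the bands into one excitation signal. The function name multiband_excitation, the FIR band splitting, and the hand-set voiced_weights are all illustrative assumptions; in NeuralDPS the corresponding quantities are produced by learned modules conditioned on the spectral parameters.

    import numpy as np
    from scipy.signal import firwin, lfilter

    def multiband_excitation(f0, voiced_weights, sr=16000, length=16000):
        """Mix a deterministic (periodic) source and a stochastic (noise) source
        with a different ratio in each frequency band, then sum the bands."""
        n_bands = len(voiced_weights)
        t = np.arange(length) / sr
        deterministic = np.sin(2.0 * np.pi * f0 * t)   # toy periodic source
        stochastic = 0.3 * np.random.randn(length)     # Gaussian noise source

        edges = np.linspace(0.0, sr / 2.0, n_bands + 1)
        excitation = np.zeros(length)
        for b, w in enumerate(voiced_weights):
            lo, hi = edges[b], edges[b + 1]
            if lo == 0.0:                              # first band: low-pass
                taps = firwin(101, hi, fs=sr)
            elif hi >= sr / 2.0:                       # last band: high-pass
                taps = firwin(101, lo, fs=sr, pass_zero=False)
            else:                                      # interior bands: band-pass
                taps = firwin(101, [lo, hi], fs=sr, pass_zero=False)
            band_det = lfilter(taps, 1.0, deterministic)
            band_sto = lfilter(taps, 1.0, stochastic)
            excitation += w * band_det + (1.0 - w) * band_sto
        return excitation

    # Lower bands mostly deterministic, upper bands mostly stochastic.
    excitation = multiband_excitation(f0=120.0, voiced_weights=[0.9, 0.7, 0.4, 0.1])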
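The noise-controllability claim can likewise be illustrated with a toy sketch. Assuming the deterministic and stochastic components of a signal are available separately, as they are in a deterministic plus stochastic decomposition, the stochastic part can be rescaled before mixing so the result matches a target SNR. The helper mix_at_target_snr below is hypothetical and is not taken from the paper.

    import numpy as np

    def mix_at_target_snr(deterministic, stochastic, target_snr_db):
        """Rescale the stochastic (noise) component so that the ratio of
        deterministic power to noise power matches the target SNR in dB."""
        p_det = np.mean(deterministic ** 2)
        p_sto = np.mean(stochastic ** 2)
        target_ratio = 10.0 ** (target_snr_db / 10.0)    # desired power ratio
        gain = np.sqrt(p_det / (p_sto * target_ratio))   # noise gain achieving it
        return deterministic + gain * stochastic

    # Example: remix a 120 Hz tone and white noise at a 20 dB SNR.
    t = np.arange(16000) / 16000.0
    periodic = np.sin(2.0 * np.pi * 120.0 * t)
    noise = np.random.randn(t.size)
    speech_20db = mix_at_target_snr(periodic, noise, target_snr_db=20.0)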

Related research

08/27/2019 | Neural Harmonic-plus-Noise Waveform Model with Trainable Maximum Voice Frequency for Text-to-Speech Synthesis
Neural source-filter (NSF) models are deep neural networks that produce ...

10/29/2018 | Neural source-filter-based waveform model for statistical parametric speech synthesis
Neural waveform models such as the WaveNet are used in many recent text-...

12/29/2019 | The Deterministic plus Stochastic Model of the Residual Signal and its Applications
The modeling of speech production often relies on a source-filter approa...

06/07/2020 | Parametric Representation for Singing Voice Synthesis: a Comparative Evaluation
Various parametric representations have been proposed to model the speec...

11/21/2022 | Embedding a Differentiable Mel-cepstral Synthesis Filter to a Neural Speech Synthesis System
This paper integrates a classic mel-cepstral synthesis filter into a mod...

12/29/2019 | A Deterministic plus Stochastic Model of the Residual Signal for Improved Parametric Speech Synthesis
Speech generated by parametric synthesizers generally suffers from a typ...

07/25/2023 | CQNV: A combination of coarsely quantized bitstream and neural vocoder for low rate speech coding
Recently, speech codecs based on neural networks have proven to perform ...
