Improving Performance of Seen and Unseen Speech Style Transfer in End-to-end Neural TTS

06/18/2021
by   Xiaochun An, et al.

End-to-end neural TTS training has shown improved performance in speech style transfer. However, the improvement is still limited by the training data in both target styles and speakers. Style transfer performance degrades when the trained TTS tries to transfer speech to a target style from a new speaker with an unknown, arbitrary style. In this paper, we propose a new approach to style transfer for both seen and unseen styles, using disjoint, multi-style datasets: datasets of different styles are recorded separately, with each style spoken by a single speaker across multiple utterances. To encode the style information, we adopt an inverse autoregressive flow (IAF) structure to improve the variational inference. The whole system is optimized to minimize a weighted sum of four loss functions: 1) a reconstruction loss to measure the distortions in both source and target reconstructions; 2) an adversarial loss to "fool" a well-trained discriminator; 3) a style distortion loss to measure the expected style loss after the transfer; 4) a cycle consistency loss to preserve the speaker identity of the source after the transfer. Experiments demonstrate, both objectively and subjectively, the effectiveness of the proposed approach for seen and unseen style transfer tasks. The new approach outperforms, and is more robust than, four baseline systems from the prior art.
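The four-term objective described in the abstract can be sketched as a simple weighted sum. The function below is a minimal illustration; the weight values and loss magnitudes are placeholders, not the paper's actual settings.

```python
def total_loss(l_rec, l_adv, l_style, l_cycle,
               w_rec=1.0, w_adv=0.5, w_style=0.5, w_cycle=1.0):
    """Weighted sum of the four training losses:
    reconstruction, adversarial, style distortion, cycle consistency.
    Weights are illustrative hyperparameters, not values from the paper.
    """
    return (w_rec * l_rec
            + w_adv * l_adv
            + w_style * l_style
            + w_cycle * l_cycle)

# Example: combine per-batch scalar losses into one training objective.
loss = total_loss(l_rec=0.8, l_adv=0.3, l_style=0.2, l_cycle=0.1)
```

In practice each term would be a differentiable tensor produced by the corresponding network component, and the weighted sum would be minimized jointly by backpropagation.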

Related research

- Disentangling Style and Speaker Attributes for TTS Style Transfer (01/24/2022)
- Language Style Transfer from Sentences with Arbitrary Unknown Styles (08/13/2018)
- Cross-speaker Style Transfer with Prosody Bottleneck in Neural Speech Synthesis (07/27/2021)
- Style-Aware Normalized Loss for Improving Arbitrary Style Transfer (04/18/2021)
- Improving Prosody for Cross-Speaker Style Transfer by Semi-Supervised Style Extractor and Hierarchical Modeling in Speech Synthesis (03/14/2023)
- Multi-Reference Neural TTS Stylization with Adversarial Cycle Consistency (10/25/2019)
- Deep Translation Prior: Test-time Training for Photorealistic Style Transfer (12/12/2021)
