Replacing Human Audio with Synthetic Audio for On-device Unspoken Punctuation Prediction

10/20/2020
by Daria Soboleva, et al.

We present a novel multi-modal unspoken punctuation prediction system for the English language that combines acoustic and text features. We demonstrate for the first time that, by relying exclusively on synthetic data generated with a prosody-aware text-to-speech system, we can outperform a model trained on expensive human audio recordings for the unspoken punctuation prediction problem. Our model architecture is well suited for on-device use: hash-based embeddings of automatic speech recognition text output are combined with acoustic features as input to a quasi-recurrent neural network, keeping the model size small and latency low.
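The abstract's architecture can be sketched at a high level: hash tokens from ASR output into a small embedding table, concatenate per-token acoustic features, and feed the result through a quasi-recurrent layer (fo-pooling, as in Bradbury et al.'s QRNN). The bucket count, dimensions, and acoustic features below are illustrative placeholders, not the paper's actual configuration:

```python
import numpy as np

def hash_embed(tokens, num_buckets=64, dim=8, seed=0):
    # Hash-based embedding: map each token to a bucket and look up a
    # fixed random table (a sketch, not the authors' exact scheme).
    rng = np.random.default_rng(seed)
    table = rng.standard_normal((num_buckets, dim)) * 0.1
    ids = [hash(tok) % num_buckets for tok in tokens]
    return table[ids]                            # (T, dim)

def qrnn_fo_pool(x, hidden=16, seed=1):
    # Quasi-recurrent layer with fo-pooling: gates come from a
    # width-2 causal convolution over time, so only the cheap
    # element-wise pooling recurrence is sequential.
    rng = np.random.default_rng(seed)
    T, d = x.shape
    Wz, Wf, Wo = (rng.standard_normal((2 * d, hidden)) * 0.1
                  for _ in range(3))
    xp = np.vstack([np.zeros((1, d)), x])        # left-pad one step
    conv = np.hstack([xp[:-1], xp[1:]])          # stack x_{t-1}, x_t
    z = np.tanh(conv @ Wz)
    f = 1.0 / (1.0 + np.exp(-(conv @ Wf)))       # forget gate
    o = 1.0 / (1.0 + np.exp(-(conv @ Wo)))       # output gate
    c = np.zeros(hidden)
    hs = []
    for t in range(T):
        c = f[t] * c + (1.0 - f[t]) * z[t]       # sequential pooling
        hs.append(o[t] * c)
    return np.array(hs)                          # (T, hidden)

# Fuse hashed-text embeddings with per-token acoustic features
# (e.g. pause duration, pitch -- placeholder random values here).
tokens = ["how", "are", "you", "doing", "today"]
acoustic = np.random.default_rng(2).standard_normal((len(tokens), 3))
x = np.hstack([hash_embed(tokens), acoustic])    # (5, 11)
h = qrnn_fo_pool(x)
print(h.shape)  # (5, 16)
```

A punctuation classifier head over each hidden state would then predict marks such as comma or question mark; hashing the vocabulary rather than storing a full embedding table is what keeps the on-device model small.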
