The Conversation: Deep Audio-Visual Speech Enhancement

04/11/2018
by Triantafyllos Afouras, et al.

Our goal is to isolate individual speakers from multi-talker simultaneous speech in videos. Existing works in this area have focused on separating utterances from known speakers in controlled environments. In this paper, we propose a deep audio-visual speech enhancement network that is able to separate a speaker's voice given lip regions in the corresponding video, by predicting both the magnitude and the phase of the target signal. The method is applicable to speakers unheard and unseen during training, and to unconstrained environments. We demonstrate strong quantitative and qualitative results, isolating extremely challenging real-world examples.
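To make the high-level description concrete, the sketch below shows one plausible shape such a network could take: visual features from the lip region are fused with the noisy-mixture spectrogram, and two output heads predict a magnitude mask and a phase correction for the target speaker. This is a minimal illustrative sketch, not the authors' published architecture; the class name `AVSpeechEnhancer`, all layer sizes, and the fusion scheme are assumptions.

```python
# Hypothetical sketch (not the paper's exact model): fuse lip-region
# features with the mixture spectrogram; predict magnitude and phase.
import torch
import torch.nn as nn


class AVSpeechEnhancer(nn.Module):
    def __init__(self, n_freq=257, visual_dim=512, hidden=256):
        super().__init__()
        # Encode the magnitude spectrogram of the noisy mixture.
        self.audio_enc = nn.Sequential(nn.Linear(n_freq, hidden), nn.ReLU())
        # Encode per-frame visual features from the lip region
        # (assumed precomputed, e.g. by a lip-reading CNN).
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, hidden), nn.ReLU())
        # Fuse the two streams over time.
        self.fusion = nn.GRU(2 * hidden, hidden,
                             batch_first=True, bidirectional=True)
        # Head 1: a sigmoid mask applied to the mixture magnitude.
        self.mag_head = nn.Sequential(nn.Linear(2 * hidden, n_freq),
                                      nn.Sigmoid())
        # Head 2: a phase residual, parameterized as (cos, sin) pairs.
        self.phase_head = nn.Linear(2 * hidden, 2 * n_freq)

    def forward(self, mix_mag, mix_phase, lip_feats):
        # mix_mag, mix_phase: (batch, time, n_freq)
        # lip_feats: (batch, time, visual_dim), upsampled to the
        # spectrogram frame rate.
        a = self.audio_enc(mix_mag)
        v = self.visual_enc(lip_feats)
        fused, _ = self.fusion(torch.cat([a, v], dim=-1))
        est_mag = self.mag_head(fused) * mix_mag           # masked magnitude
        cos_r, sin_r = self.phase_head(fused).chunk(2, dim=-1)
        est_phase = mix_phase + torch.atan2(sin_r, cos_r)  # corrected phase
        return est_mag, est_phase


# Toy forward pass on random tensors.
model = AVSpeechEnhancer()
mag = torch.rand(1, 100, 257)
phase = torch.rand(1, 100, 257)
lips = torch.rand(1, 100, 512)
est_mag, est_phase = model(mag, phase, lips)
print(est_mag.shape, est_phase.shape)  # (1, 100, 257) each
```

Predicting a phase correction on top of the mixture phase, rather than raw phase, is one common way to keep the estimation problem tractable; the enhanced waveform would then be recovered via an inverse STFT of the estimated magnitude and phase.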
