EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model

Xinya Ji et al.
SenseTime Corporation; Nanjing University; The University of Sydney; Monash University; The Chinese University of Hong Kong

Although significant progress has been made in audio-driven talking face generation, existing methods either neglect facial emotion or cannot be applied to arbitrary subjects. In this paper, we propose the Emotion-Aware Motion Model (EAMM) to generate one-shot emotional talking faces by incorporating an emotion source video. Specifically, we first propose an Audio2Facial-Dynamics module, which renders talking faces from audio-driven, unsupervised zero- and first-order key-point motions. Then, by exploring the motion model's properties, we further propose an Implicit Emotion Displacement Learner that represents emotion-related facial dynamics as linearly additive displacements to the previously acquired motion representations. Comprehensive experiments demonstrate that by combining the results of both modules, our method can generate satisfactory talking face results on arbitrary subjects with realistic emotion patterns.
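The key structural idea in the abstract — emotion modeled as a linearly additive displacement on top of audio-driven key-point motion — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`predict_audio_motion`, `emotion_displacement`) and the random stand-in predictors are assumptions for illustration, and only the additive combination mirrors the described design.

```python
import numpy as np

NUM_KEYPOINTS = 10  # unsupervised key-points, as in first-order motion models

def predict_audio_motion(audio_feat, rng):
    """Stand-in for the Audio2Facial-Dynamics module: maps audio features to
    per-key-point displacements (zero-order) and 2x2 Jacobians (first-order).
    A real model would be a learned network; random values are used here."""
    kp_disp = 0.01 * rng.standard_normal((NUM_KEYPOINTS, 2))
    jacobian = np.tile(np.eye(2), (NUM_KEYPOINTS, 1, 1))
    return kp_disp, jacobian

def emotion_displacement(emotion_feat, rng):
    """Stand-in for the Implicit Emotion Displacement Learner: produces an
    additive key-point displacement encoding emotion-related dynamics."""
    return 0.005 * rng.standard_normal((NUM_KEYPOINTS, 2))

def combine(neutral_kp, audio_feat, emotion_feat, rng):
    kp_disp, jac = predict_audio_motion(audio_feat, rng)
    emo_disp = emotion_displacement(emotion_feat, rng)
    # The property EAMM exploits: emotion is a *linearly additive*
    # displacement on top of the audio-driven key-point motion,
    # so the two modules can be trained and applied separately.
    driven_kp = neutral_kp + kp_disp + emo_disp
    return driven_kp, jac

rng = np.random.default_rng(0)
neutral_kp = rng.uniform(-1.0, 1.0, (NUM_KEYPOINTS, 2))
kp, jac = combine(neutral_kp, audio_feat=None, emotion_feat=None, rng=rng)
print(kp.shape, jac.shape)
```

The driven key-points and Jacobians would then warp the one-shot source image via a motion-transfer generator; that rendering step is outside this sketch.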




