Face Animation with an Attribute-Guided Diffusion Model

04/06/2023
by Bohan Zeng, et al.

Face animation has made significant progress in computer vision. However, prevailing GAN-based methods suffer from unnatural distortions and artifacts caused by sophisticated motion deformation. In this paper, we propose a Face Animation framework with an attribute-guided Diffusion Model (FADM), which is the first work to exploit the superior modeling capacity of diffusion models for photo-realistic talking-head generation. To mitigate the uncontrollable synthesis effects of the diffusion model, we design an Attribute-Guided Conditioning Network (AGCN) that adaptively combines coarse animation features with 3D face reconstruction results, incorporating appearance and motion conditions into the diffusion process. These designs help FADM rectify unnatural artifacts and distortions, and enrich high-fidelity facial details through iterative diffusion refinements guided by accurate animation attributes. FADM can also flexibly and effectively improve videos produced by existing animation methods. Extensive experiments on widely used talking-head benchmarks validate the effectiveness of FADM over prior arts.
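To make the conditioning idea concrete, below is a minimal PyTorch-style sketch of how an attribute-guided conditioning module might fuse coarse animation features with 3D face reconstruction attributes (e.g., pose and expression coefficients) into a map that conditions a diffusion denoiser. The class name, FiLM-style modulation, and all tensor shapes are illustrative assumptions for this sketch, not the authors' released AGCN implementation.

```python
# Hedged sketch: a toy attribute-guided conditioning module in PyTorch.
# Layer choices, shapes, and the fusion scheme are illustrative assumptions,
# not the FADM paper's actual AGCN architecture.
import torch
import torch.nn as nn


class AttributeGuidedConditioning(nn.Module):
    """Fuses coarse animation features with flattened 3D reconstruction
    attributes into a conditioning map for a diffusion denoiser."""

    def __init__(self, feat_channels=64, attr_dim=64, cond_channels=64):
        super().__init__()
        # Map scalar 3D-face attributes to per-channel scale and shift.
        self.attr_mlp = nn.Sequential(
            nn.Linear(attr_dim, 2 * cond_channels),
            nn.SiLU(),
            nn.Linear(2 * cond_channels, 2 * cond_channels),
        )
        # Encode the coarse animation feature map.
        self.feat_conv = nn.Sequential(
            nn.Conv2d(feat_channels, cond_channels, 3, padding=1),
            nn.SiLU(),
        )

    def forward(self, coarse_feat, attrs):
        # coarse_feat: (B, feat_channels, H, W) from a coarse animation stage
        # attrs:       (B, attr_dim) flattened pose/expression coefficients
        h = self.feat_conv(coarse_feat)
        scale, shift = self.attr_mlp(attrs).chunk(2, dim=-1)
        # Broadcast the attribute-derived scale/shift over spatial dims
        # (FiLM-style modulation, assumed here for illustration only).
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        return h  # conditioning map to concatenate with the noisy input


if __name__ == "__main__":
    cond_net = AttributeGuidedConditioning()
    feat = torch.randn(2, 64, 32, 32)   # coarse animation features
    attrs = torch.randn(2, 64)          # e.g., pose + expression coefficients
    cond = cond_net(feat, attrs)
    print(cond.shape)                   # torch.Size([2, 64, 32, 32])
```

The resulting conditioning map would typically be concatenated (or added) to the noisy frame fed to the denoising network at each diffusion step, so that appearance and motion cues steer the iterative refinement.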


