Diffusion Video Autoencoders: Toward Temporally Consistent Face Video Editing via Disentangled Video Encoding

12/06/2022
by   Gyeongman Kim, et al.

Inspired by the impressive performance of recent face image editing methods, several studies have naturally extended these methods to the face video editing task. One of the main challenges here is temporal consistency among edited frames, which remains unresolved. To this end, we propose a novel face video editing framework based on diffusion autoencoders that can successfully extract decomposed features of identity and motion from a given video - for the first time in a face video editing model. This decomposition allows us to edit the video consistently by simply manipulating the temporally invariant identity feature in the desired direction. Another unique strength of our model is that, because it is based on diffusion models, it satisfies both reconstruction and editing capabilities at the same time and, unlike existing GAN-based methods, is robust to corner cases in wild face videos (e.g. occluded faces).
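As a toy illustration of the editing scheme described in the abstract (all names, shapes, and the linear encoders below are hypothetical stand-ins; the actual model uses a diffusion autoencoder as encoder/decoder), the key idea is to encode a video into one temporally invariant identity code plus per-frame motion codes, edit the identity code once, and reuse that single edited code for every frame:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoders" standing in for the learned networks (hypothetical).
W_id = rng.standard_normal((8, 16))   # maps a frame to an identity code
W_mo = rng.standard_normal((4, 16))   # maps a frame to a motion code

def encode_video(frames):
    # Identity: a single temporally invariant code, averaged over all frames.
    z_id = np.mean([W_id @ f for f in frames], axis=0)
    # Motion: one code per frame, carrying the time-varying information.
    z_mo = [W_mo @ f for f in frames]
    return z_id, z_mo

def edit_video_latents(frames, edit_dir, alpha=1.0):
    z_id, z_mo = encode_video(frames)
    # Edit the invariant identity feature ONCE, in the desired direction.
    z_id_edit = z_id + alpha * edit_dir
    # Each frame's latent pairs the SAME edited identity with its own motion,
    # which is what enforces temporal consistency of the edit.
    return [np.concatenate([z_id_edit, m]) for m in z_mo]

frames = [rng.standard_normal(16) for _ in range(5)]
edit_dir = rng.standard_normal(8)
out = edit_video_latents(frames, edit_dir, alpha=0.8)
# The identity part (first 8 dims) is identical across all edited frames.
assert all(np.allclose(o[:8], out[0][:8]) for o in out)
```

In the full model these latents would condition a diffusion decoder that renders each edited frame; the sketch only shows why a single shared identity code makes the edit temporally consistent by construction.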


Related research:

- Task-agnostic Temporally Consistent Facial Video Editing (07/03/2020): Recent research has witnessed the advances in facial image editing tasks...
- StableVideo: Text-driven Consistency-aware Diffusion Video Editing (08/18/2023): Diffusion-based methods can generate realistic images and videos, but th...
- FacialFilmroll: High-resolution multi-shot video editing (10/05/2021): We present FacialFilmroll, a solution for spatially and temporally consi...
- Parametric Reshaping of Portraits in Videos (05/05/2022): Sharing short personalized videos to various social media networks has b...
- UniFaceGAN: A Unified Framework for Temporally Consistent Facial Video Editing (08/12/2021): Recent research has witnessed advances in facial image editing tasks inc...
- Speech Driven Video Editing via an Audio-Conditioned Diffusion Model (01/10/2023): In this paper we propose a method for end-to-end speech driven video edi...
- VIVE3D: Viewpoint-Independent Video Editing using 3D-Aware GANs (03/28/2023): We introduce VIVE3D, a novel approach that extends the capabilities of i...
