Appearance Composing GAN: A General Method for Appearance-Controllable Human Video Motion Transfer

11/25/2019
by   Dongxu Wei, et al.

Due to the rapid development of GANs, there has been significant progress in human video motion transfer, which has a wide range of applications in computer vision and graphics. However, existing works support only motion-controllable video synthesis: the appearances of different video components are bound together and cannot be controlled independently, so a person can appear only with the same clothing and background. Moreover, most of these works are person-specific and require training an individual model for each person, which is inflexible and inefficient. We therefore propose appearance composing GAN, a general method that enables control over not only human motions but also video appearances, for arbitrary human subjects, within a single model. The key idea is to exert layout-level appearance control on the different video components and fuse them to compose the desired full video scene. Specifically, we achieve such appearance control by providing our model with optimal appearance conditioning inputs obtained separately for each component, allowing controllable component-appearance synthesis for different people simply by changing the input appearance conditions. For synthesis, a two-stage GAN framework is proposed that sequentially generates the desired body semantic layouts and component appearances, both consistent with the input human motions and appearance conditions. Coupled with our ACGAN loss and background modulation block, the proposed method achieves general and appearance-controllable human video motion transfer. Moreover, we build a dataset containing a large number of dance videos for training and evaluation. Experimental results show that, when applied to motion transfer tasks involving a variety of human subjects, our method achieves appearance-controllable synthesis with higher video quality than state-of-the-art methods, based on only one-time training.
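The two-stage pipeline described in the abstract can be sketched as a data-flow skeleton. This is a minimal sketch under assumed, simplified shapes: the actual stages are convolutional GAN generators, which are replaced here by hypothetical placeholder functions (`stage1_layout_generator`, `stage2_appearance_generator`) that only illustrate the interface — pose plus appearance condition in, semantic layout out, then layout plus appearance condition fused with the background into a frame.

```python
import numpy as np

H, W = 64, 64    # frame resolution (illustrative)
N_PARTS = 8      # number of body-part semantic classes, including background (assumed)
rng = np.random.default_rng(0)

def stage1_layout_generator(pose_keypoints, appearance_cond):
    """Stage 1 (hypothetical placeholder): map the driving pose and the
    appearance condition to a per-pixel body semantic layout.
    The real model is a GAN generator; here we emit random part logits."""
    logits = rng.standard_normal((N_PARTS, H, W))
    return np.argmax(logits, axis=0)            # (H, W) label map

def stage2_appearance_generator(layout, appearance_cond, background):
    """Stage 2 (hypothetical placeholder): render component appearances
    consistent with the layout, then fuse with the background
    (standing in for the background modulation block)."""
    frame = background.copy()
    fg_mask = layout > 0                         # label 0 = background (assumed)
    # Paint each foreground pixel with the colour conditioned for its part.
    frame[fg_mask] = appearance_cond[layout[fg_mask]]
    return frame

# Illustrative inputs: 17 2-D keypoints, one RGB condition per part, a background.
pose = rng.standard_normal((17, 2))
appearance_cond = rng.random((N_PARTS, 3))
background = rng.random((H, W, 3))

layout = stage1_layout_generator(pose, appearance_cond)
frame = stage2_appearance_generator(layout, appearance_cond, background)
```

Swapping `appearance_cond` for a different subject's conditions changes the rendered component appearances without retraining, which is the appearance-controllability the method targets.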


