Text-driven Video Prediction

10/06/2022
by Xue Song et al.

Current video generation models typically convert appearance and motion signals from inputs (e.g., images, text) or latent spaces (e.g., noise vectors) into consecutive frames, following a stochastic generation process in which uncertainty arises from latent-code sampling. However, this pattern imposes no deterministic constraints on either appearance or motion, leading to uncontrollable and undesirable outcomes. To this end, we propose a new task called Text-driven Video Prediction (TVP): taking the first frame and a text caption as inputs, the task is to synthesize the following frames, with the appearance and motion components provided by the image and the caption, respectively. The key to the TVP task lies in fully exploiting the underlying motion information in the text description to enable plausible video generation. The task is intrinsically a cause-and-effect problem, since the text content directly governs how the motion of the frames evolves. To exploit the capability of text for causal inference over progressive motion, our TVP framework contains a Text Inference Module (TIM) that produces step-wise embeddings to regulate motion inference for subsequent frames. In addition, a refinement mechanism incorporating global motion semantics guarantees coherent generation. Extensive experiments on the Something-Something V2 and Single Moving MNIST datasets demonstrate that our model outperforms the baselines, verifying the effectiveness of the proposed framework.
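The abstract does not include code, but the step-wise text conditioning it describes can be illustrated with a minimal sketch: a recurrence that unrolls a single caption embedding into one motion embedding per future frame. All names, dimensions, and weights below are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def text_inference_module(caption_emb, num_steps):
    """Hypothetical sketch of step-wise text inference: unroll the
    caption embedding into one motion embedding per future frame.
    Weights are random placeholders standing in for learned ones."""
    d = caption_emb.shape[0]
    W_h = rng.standard_normal((d, d)) * 0.1  # recurrent weights (placeholder)
    W_c = rng.standard_normal((d, d)) * 0.1  # caption-conditioning weights (placeholder)
    h = np.zeros(d)
    steps = []
    for _ in range(num_steps):
        # Each step is conditioned on both the previous step and the caption,
        # so motion semantics are injected progressively rather than once.
        h = np.tanh(W_h @ h + W_c @ caption_emb)
        steps.append(h.copy())
    return np.stack(steps)  # shape: (num_steps, d)


caption = rng.standard_normal(16)  # stand-in caption embedding
motion_embs = text_inference_module(caption, num_steps=5)
print(motion_embs.shape)  # (5, 16)
```

In this toy form, each of the five embeddings would condition the generation of one subsequent frame; the refinement mechanism with global motion semantics mentioned in the abstract is omitted here.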


Related research

- Make It Move: Controllable Image-to-Video Generation with Text Descriptions (12/06/2021)
- Text2Performer: Text-Driven Human Video Generation (04/17/2023)
- Skeleton-aided Articulated Motion Generation (07/04/2017)
- MMVP: Motion-Matrix-based Video Prediction (08/30/2023)
- TiVGAN: Text to Image to Video Generation with Step-by-Step Evolutionary Generator (09/04/2020)
- Fg-T2M: Fine-Grained Text-Driven Human Motion Generation via Diffusion Model (09/12/2023)
- Synthesizing Artistic Cinemagraphs from Text (07/06/2023)
