Video Extrapolation with an Invertible Linear Embedding

03/01/2019
by Robert Pottorff, et al.

We predict future video frames from complex dynamic scenes, using an invertible neural network as the encoder of a nonlinear dynamic system with latent linear state evolution. Our invertible linear embedding (ILE) demonstrates successful learning, prediction and latent state inference. In contrast to other approaches, ILE does not use any explicit reconstruction loss or simplistic pixel-space assumptions. Instead, it leverages invertibility to optimize the likelihood of image sequences exactly, albeit indirectly. Comparison with a state-of-the-art method demonstrates the viability of our approach.
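The core idea in the abstract — encode frames with an invertible map, evolve the latent state with linear dynamics, then decode by inverting the encoder — can be illustrated with a toy sketch. Here the paper's invertible neural network is replaced by a simple invertible linear map `W`, and the latent transition matrix `A` is a hypothetical stand-in, so this is only a minimal illustration of the encode/evolve/decode structure, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy "frame" dimensionality

# Stand-in for the invertible encoder: a well-conditioned linear bijection.
# (The paper uses an invertible neural network; a linear map keeps this
# sketch exactly invertible and self-contained.)
W = rng.normal(size=(D, D)) + 3 * np.eye(D)
W_inv = np.linalg.inv(W)

# Hypothetical latent linear dynamics: z_{t+1} = A @ z_t.
A = np.diag([0.9, 0.8, 0.7, 0.6])

def encode(x):
    """Frame -> latent state."""
    return W @ x

def decode(z):
    """Latent state -> frame; exact inverse of encode, no reconstruction loss."""
    return W_inv @ z

def predict(x0, steps):
    """Extrapolate future frames: encode once, evolve linearly, decode each step."""
    z = encode(x0)
    frames = []
    for _ in range(steps):
        z = A @ z
        frames.append(decode(z))
    return frames

x0 = rng.normal(size=D)
future = predict(x0, steps=3)

# Invertibility means encoding then decoding recovers the frame exactly
# (up to floating-point error) -- the property that lets the ILE optimize
# the sequence likelihood without a pixel-space reconstruction term.
assert np.allclose(decode(encode(x0)), x0)
```

Because the encoder is a bijection, no information is lost in the latent space, which is what allows the likelihood of the image sequence to be optimized exactly rather than approximated through a reconstruction objective.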

