D-NeRF: Neural Radiance Fields for Dynamic Scenes

11/27/2020
by Albert Pumarola, et al.

Neural rendering techniques that combine machine learning with geometric reasoning have emerged as one of the most promising approaches for synthesizing novel views of a scene from a sparse set of images. Among these, Neural Radiance Fields (NeRF) stands out: it trains a deep network to map 5D input coordinates (representing spatial location and viewing direction) to a volume density and view-dependent emitted radiance. However, despite achieving an unprecedented level of photorealism in the generated images, NeRF applies only to static scenes, where the same spatial location can be queried from different images. In this paper we introduce D-NeRF, a method that extends neural radiance fields to the dynamic domain, allowing the reconstruction and rendering of novel images of objects under rigid and non-rigid motion from a single camera moving around the scene. To this end we consider time an additional input to the system and split the learning process into two main stages: one that encodes the scene into a canonical space, and another that maps this canonical representation into the deformed scene at a particular time. Both mappings are learned simultaneously using fully-connected networks. Once the networks are trained, D-NeRF can render novel images while controlling both the camera view and the time variable, and thus the object's motion. We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated, and non-rigid motion. Code, model weights, and the dynamic scenes dataset will be released.
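The two-stage query described above can be sketched in a few lines: a deformation network maps a point and a time value to a displacement into the canonical space, and a canonical network maps the displaced point plus viewing direction to a density and color. The sketch below uses tiny numpy MLPs with NeRF-style positional encoding; the layer widths, frequency counts, and activations here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, n_freqs=4):
    """NeRF-style encoding: append sin/cos of the input at growing frequencies."""
    out = [x]
    for k in range(n_freqs):
        out.append(np.sin((2.0 ** k) * np.pi * x))
        out.append(np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(out, axis=-1)

def init_mlp(sizes):
    """Random weights for a small fully-connected network (illustrative sizes)."""
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with ReLU hidden layers and a linear output layer."""
    for W, b in params[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = params[-1]
    return x @ W + b

# Encoded dims: position (3) -> 27, time (1) -> 9, direction (3) -> 27.
deform_net = init_mlp([36, 64, 64, 3])  # stage 1: (x, t) -> displacement
canon_net  = init_mlp([54, 64, 64, 4])  # stage 2: (x + dx, d) -> (sigma, rgb)

def query(x, d, t):
    """Query density and color at position x, view direction d, time t."""
    dx = mlp(deform_net, np.concatenate(
        [positional_encoding(x), positional_encoding(t)], axis=-1))
    x_canon = x + dx  # transport the point into the canonical space
    out = mlp(canon_net, np.concatenate(
        [positional_encoding(x_canon), positional_encoding(d)], axis=-1))
    sigma = np.maximum(out[..., :1], 0.0)       # density is non-negative
    rgb = 1.0 / (1.0 + np.exp(-out[..., 1:]))   # color squashed into [0, 1]
    return sigma, rgb

sigma, rgb = query(np.zeros(3), np.ones(3), np.zeros(1))
```

Because both networks are queried per sample point, the same volume-rendering integral used in NeRF applies unchanged; the time input only affects the displacement, so the canonical network sees a single, consistent scene across all frames.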


Related research

- 03/19/2020: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. "We present a method that achieves state-of-the-art results for synthesiz..."
- 12/22/2020: STaR: Self-supervised Tracking and Reconstruction of Rigid Objects in Motion with Neural Rendering. "We present STaR, a novel method that performs Self-supervised Tracking a..."
- 02/27/2023: BaLi-RF: Bandlimited Radiance Fields for Dynamic Scene Modeling. "Reasoning the 3D structure of a non-rigid dynamic scene from a single mo..."
- 08/16/2023: SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes. "Existing methods for the 4D reconstruction of general, non-rigidly defor..."
- 06/24/2021: HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields. "Neural Radiance Fields (NeRF) are able to reconstruct scenes with unprec..."
- 12/03/2021: CoNeRF: Controllable Neural Radiance Fields. "We extend neural 3D representations to allow for intuitive and interpret..."
- 06/08/2021: MoCo-Flow: Neural Motion Consensus Flow for Dynamic Humans in Stationary Monocular Cameras. "Synthesizing novel views of dynamic humans from stationary monocular cam..."
