HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars

12/19/2021
by Tao Hu, et al.

We propose a novel neural rendering pipeline, Hybrid Volumetric-Textural Rendering (HVTR), which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality. First, we learn to encode articulated human motions on a dense UV manifold of the human body surface. To handle complicated motions (e.g., self-occlusions), we then leverage the encoded information on the UV manifold to construct a 3D volumetric representation based on a dynamic pose-conditioned neural radiance field. While this allows us to represent 3D geometry with changing topology, volumetric rendering is computationally heavy. Hence we employ only a rough volumetric representation using a pose-conditioned downsampled neural radiance field (PD-NeRF), which we can render efficiently at low resolutions. In addition, we learn 2D textural features that are fused with rendered volumetric features in image space. The key advantage of our approach is that we can then convert the fused features into a high resolution, high-quality avatar by a fast GAN-based textural renderer. We demonstrate that hybrid rendering enables HVTR to handle complicated motions, render high-quality avatars under user-controlled poses/shapes and even loose clothing, and most importantly, be fast at inference time. Our experimental results also demonstrate state-of-the-art quantitative results.
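To make the hybrid pipeline concrete, here is a minimal sketch of the data flow the abstract describes: a cheap pose-conditioned downsampled NeRF (PD-NeRF) produces low-resolution volumetric features, 2D textural features are computed at the same resolution, the two are fused in image space, and a fast 2D renderer upsamples the fused features to a high-resolution image. All function names, shapes, and internals below are illustrative stand-ins, not the authors' implementation; the PD-NeRF, textural encoder, and GAN renderer are stubbed with toy operations.

```python
import numpy as np

# Illustrative sketch of HVTR's hybrid rendering flow. The heavy 3D work
# (PD-NeRF) happens only at low resolution; the expensive-looking final
# image is produced by a purely 2D renderer, which is what makes the
# approach fast at inference time.

def render_pdnerf(pose, res=16, feat_dim=8):
    """Stub for the pose-conditioned downsampled NeRF (PD-NeRF):
    returns a low-resolution volumetric feature map (res x res x feat_dim)."""
    rng = np.random.default_rng(hash(tuple(pose)) % (2**32))
    return rng.standard_normal((res, res, feat_dim))

def textural_features(pose, res=16, feat_dim=8):
    """Stub for pose-dependent 2D textural features learned on the
    UV manifold of the body surface."""
    rng = np.random.default_rng((hash(tuple(pose)) + 1) % (2**32))
    return rng.standard_normal((res, res, feat_dim))

def fuse(vol_feat, tex_feat):
    """Fuse rendered volumetric and textural features in image space
    (here: simple channel concatenation)."""
    return np.concatenate([vol_feat, tex_feat], axis=-1)

def gan_textural_renderer(fused, upscale=16):
    """Stub for the fast GAN-based textural renderer: maps fused
    low-resolution features to a high-resolution RGB image.
    Here: toy channel projection + nearest-neighbour upsampling."""
    rgb = fused[..., :3]                      # toy 'learned' projection to RGB
    hi = rgb.repeat(upscale, axis=0).repeat(upscale, axis=1)
    return np.clip(hi, -1.0, 1.0)

pose = (0.1, 0.2, 0.3)                        # placeholder pose vector
vol = render_pdnerf(pose)                     # cheap: 16x16 volumetric pass
tex = textural_features(pose)                 # 2D features, same resolution
img = gan_textural_renderer(fuse(vol, tex))   # 2D upsampling to 256x256
print(img.shape)                              # (256, 256, 3)
```

The split mirrors the key design choice in the abstract: volumetric rendering gives pose-dependent 3D geometry with changing topology but is only run coarsely, while image fidelity comes from the 2D textural branch and upsampling renderer.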


Related research

- 03/21/2023: Real-time volumetric rendering of dynamic humans
- 03/29/2022: DRaCoN – Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars
- 12/05/2020: Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction
- 03/10/2023: You Only Train Once: Multi-Identity Free-Viewpoint Neural Human Rendering from Monocular Videos
- 07/30/2020: Quantitative Distortion Analysis of Flattening Applied to the Scroll from En-Gedi
- 09/02/2020: Going beyond Free Viewpoint: Creating Animatable Volumetric Video of Human Performances
- 02/12/2022: NeuVV: Neural Volumetric Videos with Immersive Rendering and Editing
