NARRATE: A Normal Assisted Free-View Portrait Stylizer

by Youjia Wang, et al.

In this work, we propose NARRATE, a novel pipeline that enables simultaneous editing of portrait lighting and perspective in a photorealistic manner. As a hybrid neural-physical face model, NARRATE leverages the complementary benefits of geometry-aware generative approaches and normal-assisted physical face models. In a nutshell, NARRATE first inverts the input portrait to a coarse geometry and employs neural rendering to generate images resembling the input, as well as producing convincing pose changes. However, the inversion step introduces mismatches, yielding low-quality images with fewer facial details. We therefore estimate portrait normals to enhance the coarse geometry, creating a high-fidelity physical face model. In particular, we fuse the neural and physical renderings to compensate for the imperfect inversion, producing novel-view images that are both realistic and view-consistent. In the relighting stage, previous works focus on single-view portrait relighting while ignoring consistency across perspectives, which leads to unstable and inconsistent lighting effects under view changes. We extend Total Relighting to address this problem by unifying its multi-view input normal maps with the physical face model. NARRATE conducts relighting with consistent normal maps, imposing cross-view constraints and exhibiting stable and coherent illumination effects. We experimentally demonstrate that NARRATE achieves more photorealistic and reliable results than prior works. We further bridge NARRATE with animation and style transfer tools, supporting pose change, light change, facial animation, and style transfer, either separately or in combination, all at a photographic quality. We showcase vivid free-view facial animations as well as 3D-aware relightable stylization, which help facilitate various AR/VR applications like virtual cinematography, 3D video conferencing, and post-production.
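The abstract walks through a multi-stage pipeline: inversion to coarse geometry, neural rendering, normal estimation, physical rendering, and fusion. The control flow of those stages can be sketched as below; this is a minimal illustrative sketch only, and every function name, tensor shape, and the simple alpha blend used for fusion are assumptions for exposition, not the authors' actual implementation or API.

```python
import numpy as np

H, W = 64, 64  # toy image resolution for the sketch

def invert_to_coarse_geometry(portrait):
    """Stage 1 (placeholder): GAN inversion of the portrait to a
    coarse geometry proxy."""
    return np.zeros((H, W, 3))

def neural_render(geometry, pose):
    """Stage 2 (placeholder): geometry-aware neural rendering at a
    novel pose; realistic but may lack fine facial detail."""
    return np.random.rand(H, W, 3)

def estimate_normals(portrait):
    """Stage 3 (placeholder): per-pixel normal estimation used to
    enhance the coarse geometry into a physical face model."""
    return np.random.rand(H, W, 3)

def physical_render(normals, pose, lighting):
    """Stage 4 (placeholder): render the normal-enhanced physical
    face model under the target pose and lighting."""
    return np.random.rand(H, W, 3)

def fuse(neural_img, physical_img, alpha=0.5):
    """Stage 5: fuse neural and physical renderings to compensate
    for imperfect inversion (a plain alpha blend stands in here)."""
    return alpha * neural_img + (1.0 - alpha) * physical_img

def narrate_pipeline(portrait, pose, lighting):
    geometry = invert_to_coarse_geometry(portrait)
    neural_img = neural_render(geometry, pose)
    normals = estimate_normals(portrait)
    physical_img = physical_render(normals, pose, lighting)
    return fuse(neural_img, physical_img)

out = narrate_pipeline(np.zeros((H, W, 3)), pose=(0.1, 0.0), lighting=None)
print(out.shape)  # (64, 64, 3)
```

The same physical face model supplies the consistent multi-view normal maps used in the relighting stage, which is what imposes the cross-view constraints the abstract describes.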




