SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction

12/01/2022
by   Zhizhuo Zhou, et al.

We propose SparseFusion, a sparse-view 3D reconstruction approach that unifies recent advances in neural rendering and probabilistic image generation. Existing approaches typically build on neural rendering with re-projected features, but they fail to generate unseen regions or handle uncertainty under large viewpoint changes. Alternative methods treat this as a (probabilistic) 2D synthesis task, and while they can generate plausible 2D images, they do not infer a consistent underlying 3D representation. However, we find that this trade-off between 3D consistency and probabilistic image generation does not need to exist. In fact, we show that geometric consistency and generative inference can be complementary through mode-seeking behavior. By distilling a 3D-consistent scene representation from a view-conditioned latent diffusion model, we are able to recover a plausible 3D representation whose renderings are both accurate and realistic. We evaluate our approach across 51 categories in the CO3D dataset and show that it outperforms existing methods, in both distortion and perception metrics, for sparse-view novel view synthesis.
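To make the distillation idea concrete, the sketch below shows the general shape of a score-distillation-style loop in PyTorch: a differentiable scene representation is rendered from a target pose, and a frozen view-conditioned diffusion prior supplies the gradient that pulls the rendering toward a plausible mode. This is a minimal illustration, not the authors' code; `TinyNeRF`, `ViewConditionedDenoiser`, and the update rule are hypothetical, simplified stand-ins for the paper's neural renderer, latent diffusion model, and distillation objective.

```python
# Minimal sketch (not the SparseFusion implementation) of distilling a
# view-conditioned diffusion prior into a 3D scene representation.
import torch
import torch.nn as nn


class TinyNeRF(nn.Module):
    """Toy differentiable scene representation: maps a camera pose to an image."""
    def __init__(self, image_size=32):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(nn.Linear(12, 256), nn.ReLU(),
                                 nn.Linear(256, 3 * image_size * image_size))

    def forward(self, pose):                         # pose: (B, 12) flattened [R|t]
        img = self.net(pose).view(-1, 3, self.image_size, self.image_size)
        return torch.sigmoid(img)


class ViewConditionedDenoiser(nn.Module):
    """Toy epsilon-prediction network conditioned on the target camera pose."""
    def __init__(self, image_size=32):
        super().__init__()
        in_dim = 3 * image_size * image_size + 12 + 1
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                 nn.Linear(512, 3 * image_size * image_size))

    def forward(self, noisy_img, pose, t):
        inp = torch.cat([noisy_img.flatten(1), pose, t.view(-1, 1)], dim=1)
        return self.net(inp).view_as(noisy_img)


def distill_step(scene, denoiser, pose, alphas_cumprod, optimizer):
    """One score-distillation update: nudge the scene so its rendering
    moves toward a mode of the (frozen) view-conditioned diffusion prior."""
    render = scene(pose)                                     # (B, 3, H, W)
    t = torch.randint(1, len(alphas_cumprod), (render.shape[0],))
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(render)
    noisy = a_t.sqrt() * render + (1 - a_t).sqrt() * noise   # forward diffusion
    with torch.no_grad():
        eps_pred = denoiser(noisy, pose, t.float())          # prior stays fixed
    # SDS-style gradient: (eps_pred - noise), backpropagated only through the renderer.
    grad = (eps_pred - noise).detach()
    loss = (render * grad).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()


if __name__ == "__main__":
    scene, denoiser = TinyNeRF(), ViewConditionedDenoiser()
    alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
    opt = torch.optim.Adam(scene.parameters(), lr=1e-3)
    for _ in range(10):
        pose = torch.randn(4, 12)                            # random target views
        distill_step(scene, denoiser, pose, alphas_cumprod, opt)
```

In this sketch the diffusion model's denoising residual acts as a per-view critic, while the shared scene representation enforces 3D consistency across all sampled poses; the timestep weighting and the actual rendering and conditioning machinery of the paper are omitted for brevity.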


