Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data

06/13/2023
by   Stanislaw Szymanowicz, et al.

We present Viewset Diffusion: a framework for training image-conditioned 3D generative models from 2D data. Image-conditioned 3D generative models allow us to address the inherent ambiguity in single-view 3D reconstruction. Given one image of an object, there is often more than one possible 3D volume that matches the input image, because a single image never captures all sides of an object. Deterministic models are inherently limited to producing one possible reconstruction and therefore make mistakes in ambiguous settings. Modelling distributions of 3D shapes is challenging because 3D ground-truth data is often not available. We propose to solve the issue of data availability by training a diffusion model which jointly denoises a multi-view image set. We constrain the output of Viewset Diffusion models to a single 3D volume per image set, guaranteeing consistent geometry. The model is trained with reconstruction losses on renderings, requiring only three images per object. Our design of architecture and training scheme allows our model to perform 3D generation and generative, ambiguity-aware single-view reconstruction in a feed-forward manner. Project page: szymanowiczs.github.io/viewset-diffusion.
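To make the training idea concrete, below is a minimal sketch (not the authors' code) of the viewset-diffusion recipe described in the abstract: a denoiser maps a noisy set of views to a single 3D volume per image set, the volume is rendered back to each viewpoint, and reconstruction losses on the renderings supervise the model. All module names, tensor shapes, the noise schedule, and the toy "renderer" are assumptions for illustration; a real implementation would use a proper differentiable renderer with camera poses and a full diffusion schedule.

```python
# Minimal, hypothetical sketch of viewset diffusion training (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewsetDenoiser(nn.Module):
    """Hypothetical denoiser: encodes a noisy viewset into one RGB+density volume."""
    def __init__(self, n_views=3, img_size=32, vol_size=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        feat_dim = 64 * (img_size // 4) ** 2
        # One volume per viewset: 4 channels (RGB + density) over a vol_size^3 grid.
        self.to_volume = nn.Linear(n_views * feat_dim, 4 * vol_size ** 3)
        self.vol_size = vol_size

    def forward(self, noisy_views, t):
        # noisy_views: (B, V, 3, H, W); t: (B,) diffusion timestep (ignored in this toy sketch).
        B, V, _, _, _ = noisy_views.shape
        feats = self.encoder(noisy_views.flatten(0, 1)).flatten(1)   # (B*V, feat_dim)
        volume = self.to_volume(feats.view(B, -1))                   # fuse views -> one volume
        return volume.view(B, 4, self.vol_size, self.vol_size, self.vol_size)

def render_views(volume, n_views):
    # Toy stand-in for a differentiable renderer: density-weighted colour averaged
    # along one axis. A real implementation would ray-march per camera pose.
    rgb, density = volume[:, :3], volume[:, 3:].sigmoid()
    img = (rgb * density).mean(dim=-1)                               # (B, 3, D, D)
    img = F.interpolate(img, size=(32, 32), mode="bilinear", align_corners=False)
    return img.unsqueeze(1).expand(-1, n_views, -1, -1, -1)          # (B, V, 3, 32, 32)

# One training step: noise the viewset, denoise into a single volume, render, compare.
B, V = 2, 3
views = torch.rand(B, V, 3, 32, 32)              # e.g. three posed images per object
t = torch.randint(0, 1000, (B,))
noisy = 0.7 * views + 0.3 * torch.randn_like(views)   # placeholder noise schedule

model = ViewsetDenoiser(n_views=V)
volume = model(noisy, t)                         # single 3D volume per image set
recon = render_views(volume, V)
loss = F.mse_loss(recon, views)                  # reconstruction loss on renderings
loss.backward()
```

Because supervision comes only from rendered views, this setup needs no 3D ground truth; at test time, denoising a viewset in which some views are pure noise (or all of them, for unconditional generation) yields generative, ambiguity-aware reconstruction from a single image.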


Related research

06/29/2023 · One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization
Single image 3D reconstruction is an important but challenging task that...

04/13/2023 · Learning Controllable 3D Diffusion Models from Single-view Images
Diffusion models have recently become the de-facto approach for generati...

06/01/2023 · DiffRoom: Diffusion-based High-Quality 3D Room Reconstruction and Generation
We present DiffRoom, a novel framework for tackling the problem of high-...

06/02/2023 · PolyDiffuse: Polygonal Shape Reconstruction via Guided Set Diffusion Models
This paper presents PolyDiffuse, a novel structured reconstruction algor...

08/05/2023 · Generative Approach for Probabilistic Human Mesh Recovery using Diffusion Models
This work focuses on the problem of reconstructing a 3D human body mesh ...

04/13/2023 · Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction
3D-aware image synthesis encompasses a variety of tasks, such as scene g...

08/30/2022 · A Diffusion Model Predicts 3D Shapes from 2D Microscopy Images
Diffusion models are a class of generative models, showing superior perf...
