Compositional 3D Scene Generation using Locally Conditioned Diffusion

03/21/2023
by Ryan Po, et al.

Designing complex 3D scenes has been a tedious, manual process requiring domain expertise. Emerging text-to-3D generative models show great promise for making this task more intuitive, but existing approaches are limited to object-level generation. We introduce locally conditioned diffusion as an approach to compositional scene diffusion, providing control over semantic parts using text prompts and bounding boxes while ensuring seamless transitions between these parts. We demonstrate a score distillation sampling–based text-to-3D synthesis pipeline that enables compositional 3D scene generation at a higher fidelity than relevant baselines.
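
As a rough illustration of the core idea, the sketch below shows one locally conditioned denoising step: each bounding-box region draws its noise (score) estimate from its own text prompt, and the masked estimates are composited into a single spatially varying prediction. This is a minimal sketch assuming a generic epsilon-prediction diffusion model; the callable eps_model, the embedding shapes, and the mask layout are illustrative assumptions, not the paper's actual interface.

import torch

def locally_conditioned_eps(x_t, t, eps_model, prompt_embs, masks):
    """Blend per-region, prompt-conditioned noise predictions.

    x_t         -- (B, C, H, W) noisy sample at timestep t
    eps_model   -- callable (x_t, t, emb) -> (B, C, H, W) noise prediction
                   (stand-in for a trained text-conditioned diffusion model)
    prompt_embs -- list of K text embeddings, one per semantic region
    masks       -- list of K (1, 1, H, W) binary masks derived from the
                   user-supplied bounding boxes; assumed to tile the canvas
    """
    eps = torch.zeros_like(x_t)
    for emb, mask in zip(prompt_embs, masks):
        # Each region takes its score estimate from its own prompt, so the
        # composited prediction varies spatially with the scene layout.
        eps = eps + mask * eps_model(x_t, t, emb)
    return eps

# Toy demonstration with a placeholder denoiser; a real pipeline would plug
# in a trained text-conditioned diffusion model and its noise scheduler.
if __name__ == "__main__":
    x_t = torch.randn(1, 4, 64, 64)
    embs = [torch.randn(77, 768), torch.randn(77, 768)]      # two prompts
    left = torch.zeros(1, 1, 64, 64); left[..., :32] = 1.0   # left box
    masks = [left, 1.0 - left]                               # right box
    dummy = lambda x, t, emb: torch.randn_like(x)            # placeholder
    eps = locally_conditioned_eps(x_t, 0, dummy, embs, masks)
    print(eps.shape)  # torch.Size([1, 4, 64, 64])

Because all regions share the same latent x_t and the masked estimates are summed into one prediction, transitions between parts remain seamless, which is the behavior the abstract describes; in the paper this composited score is then used inside a score distillation sampling loop to optimize the 3D scene representation.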


research · 04/28/2023
SceneGenie: Scene Graph Guided Diffusion Models for Image Synthesis
Text-conditioned image generation has made significant progress in recent...

research · 05/29/2023
Generating Driving Scenes with Diffusion
In this paper we describe a learned method of traffic scene generation d...

research · 03/20/2023
Object-Centric Slot Diffusion
Despite remarkable recent advances, making object-centric learning work ...

research · 03/28/2023
Visual Chain-of-Thought Diffusion Models
Recent progress with conditional image diffusion models has been stunning...

research · 01/15/2023
Diffusion-based Generation, Optimization, and Planning in 3D Scenes
We introduce SceneDiffuser, a conditional generative model for 3D scene ...

research · 02/22/2023
Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC
Since their introduction, diffusion models have quickly become the preva...

research · 05/25/2023
ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
Score distillation sampling (SDS) has shown great promise in text-to-3D ...
