Zero-Shot Text-Guided Object Generation with Dream Fields

12/02/2021
by Ajay Jain, et al.

We combine neural rendering with multi-modal image and text representations to synthesize diverse 3D objects solely from natural language descriptions. Our method, Dream Fields, can generate the geometry and color of a wide range of objects without 3D supervision. Due to the scarcity of diverse, captioned 3D data, prior methods only generate objects from a handful of categories, such as those in ShapeNet. Instead, we guide generation with image-text models pre-trained on large datasets of captioned images from the web. Our method optimizes a Neural Radiance Field from many camera views so that rendered images score highly with a target caption according to a pre-trained CLIP model. To improve fidelity and visual quality, we introduce simple geometric priors, including sparsity-inducing transmittance regularization, scene bounds, and new MLP architectures. In experiments, Dream Fields produce realistic, multi-view consistent object geometry and color from a variety of natural language captions.
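The sparsity-inducing transmittance regularization mentioned above can be sketched numerically. The snippet below is an illustrative reconstruction, not the paper's implementation: it computes each ray's total transmittance from volume densities and sample spacings, then penalizes low mean transmittance up to a target value (the function names and the target value `tau` are assumptions for illustration).

```python
import numpy as np

def ray_transmittance(sigmas, deltas):
    # Total transmittance along each ray: T = exp(-sum_i sigma_i * delta_i),
    # where sigma_i is the volume density at sample i and delta_i the spacing.
    return np.exp(-np.sum(sigmas * deltas, axis=-1))

def transmittance_loss(sigmas, deltas, tau=0.88):
    # Illustrative sparsity penalty: reward mean transmittance up to a
    # target tau; once the target is reached, the loss is flat (clamped).
    mean_t = ray_transmittance(sigmas, deltas).mean()
    return -min(tau, float(mean_t))

# Toy example: 4 rays, 8 samples each. Empty space (sigma = 0) is fully
# transmissive, so mean transmittance is 1 and the loss saturates at -tau.
sigmas = np.zeros((4, 8))
deltas = np.full((4, 8), 0.1)
print(transmittance_loss(sigmas, deltas))
```

In the full method, this term would be added to the negative CLIP image-text similarity of rendered views, so the optimization favors captions matching the rendering while keeping the scene mostly empty.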


Related research

05/19/2023 | Text2NeRF: Text-Driven 3D Scene Generation with Neural Radiance Fields
Text-driven 3D scene generation is widely applicable to video gaming, fi...

12/02/2022 | 3D-TOGO: Towards Text-Guided Cross-Category 3D Object Generation
Text-guided 3D object generation aims to generate 3D objects described b...

07/04/2022 | LaTeRF: Label and Text Driven Object Radiance Fields
Obtaining 3D object representations is important for creating photo-real...

09/28/2022 | 360FusionNeRF: Panoramic Neural Radiance Fields with Joint Guidance
We present a method to synthesize novel views from a single 360^∘ panora...

07/10/2023 | Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models
The ability to generate diverse 3D articulated head avatars is vital to ...

10/04/2020 | Holistic static and animated 3D scene generation from diverse text descriptions
We propose a framework for holistic static and animated 3D scene generat...

06/13/2023 | Adding 3D Geometry Control to Diffusion Models
Diffusion models have emerged as a powerful method of generative modelin...
