SymmNeRF: Learning to Explore Symmetry Prior for Single-View View Synthesis

by Xingyi Li, et al.

We study the problem of novel view synthesis of objects from a single image. Existing methods have demonstrated the potential of single-view view synthesis, but they still fail to recover fine appearance details, especially in self-occluded areas, because a single view provides only limited information. We observe that man-made objects usually exhibit symmetric appearances, which introduce additional prior knowledge. Motivated by this, we investigate the potential performance gains of explicitly embedding symmetry into the scene representation. In this paper, we propose SymmNeRF, a neural radiance field (NeRF) based framework that combines local and global conditioning with the introduction of symmetry priors. In particular, SymmNeRF takes the pixel-aligned image features and the corresponding symmetric features as extra inputs to the NeRF, whose parameters are generated by a hypernetwork. As the parameters are conditioned on the image-encoded latent codes, SymmNeRF is scene-independent and can generalize to new scenes. Experiments on synthetic and real-world datasets show that SymmNeRF synthesizes novel views with more details regardless of the pose transformation, and demonstrates good generalization when applied to unseen objects. Code is available at:
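The symmetry prior described in the abstract can be illustrated with a small sketch: reflect a 3D query point across an assumed symmetry plane (here x = 0 in a canonical object frame, a common choice for man-made objects), project both the point and its mirror into the source image, and sample pixel-aligned features at each location. The function names, the fixed symmetry plane, and the `project` callback are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def reflect_across_plane(p, normal=np.array([1.0, 0.0, 0.0])):
    """Reflect point p across the plane through the origin with the given
    normal (x = 0 here; an assumed canonical symmetry plane)."""
    n = normal / np.linalg.norm(normal)
    return p - 2.0 * np.dot(p, n) * n

def bilinear_sample(feat, uv):
    """Bilinearly sample an HxWxC feature map at continuous pixel coords (u, v)."""
    h, w, _ = feat.shape
    u = np.clip(uv[0], 0.0, w - 1.001)
    v = np.clip(uv[1], 0.0, h - 1.001)
    u0, v0 = int(u), int(v)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * feat[v0, u0]
            + du * (1 - dv) * feat[v0, u0 + 1]
            + (1 - du) * dv * feat[v0 + 1, u0]
            + du * dv * feat[v0 + 1, u0 + 1])

def symmetry_conditioned_features(p, feat, project):
    """Concatenate pixel-aligned features of p and of its mirrored counterpart;
    this joint vector would condition the NeRF alongside a global latent code."""
    p_sym = reflect_across_plane(p)
    f = bilinear_sample(feat, project(p))
    f_sym = bilinear_sample(feat, project(p_sym))
    return np.concatenate([f, f_sym])
```

The key idea is that when the query point is self-occluded in the source view, its mirrored counterpart may still be visible, so the symmetric feature carries the appearance detail the single view otherwise lacks.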
