P^2GNet: Pose-Guided Point Cloud Generating Networks for 6-DoF Object Pose Estimation

12/19/2019
by Peiyu Yu, et al.

Humans are able to perform fast and accurate object pose estimation even under severe occlusion by exploiting object model priors learned in everyday life. However, most recently proposed pose estimation algorithms fail to exploit object model information, often achieve only limited accuracy, and tend to fall short in cluttered scenes. In this paper, we present a novel learning-based model, Pose-Guided Point Cloud Generating Networks for 6D Object Pose Estimation (P^2GNet), designed to effectively exploit object model priors to facilitate 6D object pose estimation. We achieve this with an end-to-end estimation-by-generation workflow that combines appearance information from the RGB-D image with structural knowledge from the object point cloud to enable accurate and robust pose estimation. Experiments on two commonly used benchmarks for 6D pose estimation, the YCB-Video dataset and the LineMOD dataset, demonstrate that P^2GNet outperforms state-of-the-art methods by a large margin and is markedly robust to heavy occlusion, while achieving real-time inference.
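To make the fusion idea concrete, below is a minimal, hypothetical sketch of how appearance features from an RGB-D crop might be combined with structural features from an object model point cloud to regress a 6-DoF pose (unit quaternion plus translation). The module names, layer sizes, and pooling choices here are illustrative assumptions, not the actual P^2GNet architecture or its generation branch.

```python
# Hypothetical fusion sketch: RGB-D appearance encoder + point-cloud structure
# encoder -> fused feature -> quaternion + translation regression.
# All architectural details below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoseFusionSketch(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Appearance branch: small CNN over a 4-channel (RGB + depth) crop.
        self.rgbd_encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Structure branch: shared per-point MLP over the object model point cloud.
        self.point_encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
        )
        # Fused head regresses 7 values: 4 for rotation, 3 for translation.
        self.head = nn.Sequential(
            nn.Linear(64 + 128, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 7),
        )

    def forward(self, rgbd, model_points):
        # rgbd: (B, 4, H, W) cropped RGB-D patch; model_points: (B, N, 3) object prior.
        app = self.rgbd_encoder(rgbd).flatten(1)                   # (B, 64)
        geo = self.point_encoder(model_points).max(dim=1).values   # (B, 128) global max pool
        out = self.head(torch.cat([app, geo], dim=1))              # (B, 7)
        quat = F.normalize(out[:, :4], dim=1)                      # unit quaternion (rotation)
        trans = out[:, 4:]                                         # translation vector
        return quat, trans


if __name__ == "__main__":
    net = PoseFusionSketch()
    q, t = net(torch.randn(2, 4, 64, 64), torch.randn(2, 500, 3))
    print(q.shape, t.shape)  # torch.Size([2, 4]) torch.Size([2, 3])
```

The design choice illustrated here is simply that the two modalities are encoded separately and fused before pose regression; the paper's estimation-by-generation workflow additionally uses the object point cloud as a generative prior, which this sketch does not attempt to reproduce.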
