Learning Object Relations with Graph Neural Networks for Target-Driven Grasping in Dense Clutter

03/02/2022
by   Xibai Lou, et al.

Robots in the real world frequently come across identical objects in dense clutter. When evaluating grasp poses in these scenarios, a target-driven grasping system requires knowledge of spatial relations between scene objects (e.g., proximity, adjacency, and occlusions). To efficiently complete this task, we propose a target-driven grasping system that simultaneously considers object relations and predicts 6-DoF grasp poses. A densely cluttered scene is first formulated as a grasp graph, with nodes representing object geometries in the grasp coordinate frame and edges indicating spatial relations between the objects. We design a Grasp Graph Neural Network (G2N2) that evaluates the grasp graph and finds the most feasible 6-DoF grasp pose for a target object. Additionally, we develop a shape completion-assisted grasp pose sampling method that improves sample quality and, consequently, grasping efficiency. We compare our method against several baselines in both simulated and real settings. In real-world experiments with novel objects, our approach achieves a 77.78% grasping accuracy in densely cluttered scenarios, surpassing the best-performing baseline by more than 15%. Supplementary material is available at https://sites.google.com/umn.edu/graph-grasping.
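The abstract does not spell out the G2N2 architecture, but the recipe it describes (per-object geometry features as nodes, spatial relations as edges, a learned score for a candidate grasp on the target node) can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the paper's implementation: the name `GraspGraphNet`, the pooled point-feature encoder, the single round of message passing, and the 7-D grasp encoding (position plus quaternion) are all placeholders.

```python
import torch
import torch.nn as nn

class GraspGraphNet(nn.Module):
    """Toy relational grasp scorer: pools per-object point features,
    exchanges one round of messages along scene-graph edges, and scores
    a candidate 6-DoF grasp for the target node."""

    def __init__(self, node_dim: int = 64):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(3, node_dim), nn.ReLU())
        self.message = nn.Sequential(nn.Linear(2 * node_dim, node_dim), nn.ReLU())
        # Assumed grasp encoding: a 7-vector, position (3) + quaternion (4).
        self.score = nn.Sequential(
            nn.Linear(node_dim + 7, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, node_pts, edges, grasp, target):
        # node_pts: (N, P, 3) point clouds, one per object, in the grasp frame.
        h = self.encode(node_pts).max(dim=1).values  # (N, node_dim) per-object feature
        msgs = torch.zeros_like(h)
        for src, dst in edges:  # edges stand in for spatial relations (adjacency, occlusion)
            msgs[dst] = msgs[dst] + self.message(torch.cat([h[src], h[dst]], dim=-1))
        h = h + msgs
        # Condition the target object's relational feature on the grasp pose.
        return torch.sigmoid(self.score(torch.cat([h[target], grasp])))

net = GraspGraphNet()
pts = torch.randn(3, 128, 3)              # three objects, 128 points each
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]  # bidirectional relations between neighbors
grasp = torch.randn(7)                    # candidate grasp: xyz + quaternion
print(net(pts, edges, grasp, target=1).item())  # feasibility score in (0, 1)
```

In practice, one would score a batch of sampled candidates (e.g., from the shape completion-assisted sampler the abstract mentions) and execute the highest-scoring grasp on the target.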

