MetaGrasp: Data Efficient Grasping by Affordance Interpreter Network

02/18/2019
by Junhao Cai, et al.

Data-driven approaches to grasping have shown significant advances recently, but they usually require large amounts of training data. To improve the efficiency of grasp data collection, this paper presents a novel grasp training system covering the whole pipeline from data collection to model inference. The system collects effective grasp samples with a corrective strategy assisted by the antipodal grasp rule, and we design an affordance interpreter network to predict a pixel-wise grasp affordance map. We define graspability, ungraspability, and background as the grasp affordances. The key advantage of our system is that the pixel-level affordance interpreter network, trained with only a small number of grasp samples collected under the antipodal rule, achieves strong performance on entirely unseen objects and backgrounds. Training samples are collected only in simulation. Extensive qualitative and quantitative experiments demonstrate the accuracy and robustness of the proposed approach. In the real-world grasp experiments, we achieve grasp success rates of 93% and 91%, and also reach 87% using only RGB images; when the background textures are changed, the system still performs well and can achieve up to 94%, outperforming current state-of-the-art methods.
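The abstract describes the affordance interpreter network only at a high level: it maps an RGB image to a per-pixel affordance map over three classes (graspable, ungraspable, background). The sketch below is a minimal illustration of that idea, not the authors' implementation; the architecture, class names, and tensor shapes are assumptions for demonstration.

```python
# Minimal sketch (not the authors' code): a fully convolutional network that
# maps an RGB image to a 3-class per-pixel affordance map
# (graspable / ungraspable / background). All architecture details are assumed.
import torch
import torch.nn as nn

class AffordanceInterpreterSketch(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Toy encoder-decoder; the paper's network is likely deeper.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # Returns per-pixel class logits of shape (B, 3, H, W);
        # softmax over the channel dimension yields the affordance map.
        return self.decoder(self.encoder(rgb))

model = AffordanceInterpreterSketch()
logits = model(torch.randn(1, 3, 224, 224))   # (1, 3, 224, 224)
affordance_map = logits.softmax(dim=1)        # per-pixel class probabilities
labels = affordance_map.argmax(dim=1)         # 0/1/2 label per pixel
```

At inference time, such a map can be thresholded or arg-maxed to pick candidate grasp pixels, which is how a pixel-wise affordance prediction is typically turned into grasp proposals.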
