On-Policy and Pixel-Level Grasping Across the Gap Between Simulation and Reality
Published in: IEEE Transactions on Industrial Electronics, 2024-07, Vol. 71, No. 7, pp. 7388-7399
Main Authors:
Format: Article
Language: English
Summary: Grasp detection in cluttered scenes is a very challenging task for robots. Generating synthetic grasping data is a popular way to train and test grasp methods, as in Dex-Net; yet these methods sample training grasps on 3-D synthetic object models but evaluate on images or point clouds with different sample distributions, which reduces performance due to covariate shift and sparse grasp labels. To address these problems, we propose a novel on-policy grasp detection method for parallel grippers, which can train and test on approximately the same distribution, with dense pixel-level grasp labels generated on RGB-D images. An Orthographic-Depth Grasp Generation (ODG-Generation) method is proposed to generate an orthographic depth image through a new imaging model that projects points orthographically; the method then generates multiple candidate grasps for each pixel and retains robust positive grasps through flatness detection, a force-closure metric, and collision detection. A comprehensive Pixel-Level Grasp Pose Dataset (PLGP-Dataset) is then constructed, the first pixel-level grasp dataset, with the on-policy distribution. Lastly, we build a grasp detection network with a novel data augmentation process for imbalanced training. Experiments show that our on-policy method can partially overcome the gap between simulation and reality and achieves the best performance.
ISSN: 0278-0046, 1557-9948
DOI: 10.1109/TIE.2023.3301529
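The ODG-Generation step described in the summary renders an orthographic depth image, i.e., it projects 3-D points without the perspective divide, so pixel coordinates depend only on x and y. The paper's own imaging model is not reproduced here; the following is a minimal NumPy sketch of that general idea, where the pixel size, image dimensions, and the assumption that points are already in the camera frame are all illustrative choices, not values from the paper.

```python
import numpy as np

def orthographic_depth_image(points, pixel_size=0.002, height=480, width=640):
    """Render an orthographic depth image from a point cloud (N, 3).

    Unlike a pinhole (perspective) camera, the pixel coordinates here
    scale only with x and y, never with depth z.
    """
    depth = np.full((height, width), np.inf, dtype=np.float32)
    # Orthographic mapping: u = x / pixel_size + cx, v = y / pixel_size + cy.
    cx, cy = width / 2.0, height / 2.0
    u = np.round(points[:, 0] / pixel_size + cx).astype(int)
    v = np.round(points[:, 1] / pixel_size + cy).astype(int)
    z = points[:, 2]
    # Keep only points that land inside the image bounds.
    mask = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[mask], v[mask], z[mask]
    # Z-buffering: write far-to-near so the nearest point per pixel wins.
    order = np.argsort(z)[::-1]
    depth[v[order], u[order]] = z[order]
    depth[np.isinf(depth)] = 0.0  # mark pixels with no projected point
    return depth
```

Because every pixel corresponds to a fixed metric footprint, dense per-pixel grasp labels generated on such an image share one consistent spatial scale, which is what makes pixel-level annotation tractable.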
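The summary also mentions filtering candidate grasps with a force-closure metric. For parallel grippers, a common form of this test is the antipodal condition: the axis connecting the two contact points must lie inside the friction cone at each contact. The sketch below illustrates that standard test, not the paper's exact metric; the friction coefficient mu and the assumption of outward-pointing unit surface normals are illustrative.

```python
import numpy as np

def antipodal_force_closure(p1, n1, p2, n2, mu=0.4):
    """Check the two-contact antipodal condition for a parallel gripper.

    p1, p2: contact points (3,); n1, n2: outward unit surface normals (3,).
    Returns True when the grasp axis lies inside both friction cones,
    whose half-angle is arctan(mu).
    """
    axis = p2 - p1
    axis = axis / (np.linalg.norm(axis) + 1e-12)
    half_angle = np.arctan(mu)
    # Angle between the grasp axis and each inward-pointing normal (-n).
    a1 = np.arccos(np.clip(np.dot(axis, -n1), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-axis, -n2), -1.0, 1.0))
    return a1 <= half_angle and a2 <= half_angle

# Example: two opposing contacts on a 4 cm-wide box face.
p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
p2, n2 = np.array([0.04, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(antipodal_force_closure(p1, n1, p2, n2))  # True: axis is inside both cones
```

Candidates passing this geometric filter would still need the flatness and collision checks described in the summary before being labeled as positive grasps.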