Learning better generative models for dexterous, single-view grasping of novel objects

Bibliographic Details
Published in: The International Journal of Robotics Research, 2019-09, Vol. 38 (10-11), pp. 1246-1267
Main Authors: Kopicki, Marek S, Belter, Dominik, Wyatt, Jeremy L
Format: Article
Language: English
Description
Summary: This paper concerns the problem of how to learn to grasp dexterously, so as to be able to then grasp novel objects seen only from a single viewpoint. Recently, progress has been made in data-efficient learning of generative grasp models that transfer well to novel objects. These generative grasp models are learned from demonstration (LfD). They have two weaknesses. First, as this paper shows, grasp transfer under challenging single-view conditions is unreliable. Second, the number of generative model elements increases linearly with the number of training examples. This, in turn, limits the potential of these generative models for generalization and continual improvement. This paper shows how to address these problems. Several technical contributions are made: (i) a view-based model of a grasp; (ii) a method for combining and compressing multiple grasp models; (iii) a new way of evaluating contacts that is used both to generate and to score grasps. Together, these improve grasp performance and reduce the number of models learned. These advances, in turn, allow the introduction of autonomous training, in which the robot learns from self-generated grasps. Evaluation on a challenging test set shows that, with innovations (i)–(iii) deployed, grasp transfer success increases from 55.1% to 81.6%. Adding autonomous training raises this to 87.8%. These differences are statistically significant. In total, across all experiments, 539 test grasps were executed on real objects.
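To make the "statistically significant" claim concrete, a two-proportion z-test is one standard way such a difference in grasp success rates could be checked. The rates (55.1% vs. 81.6%) come from the abstract; the per-condition sample size of 49 grasps is a hypothetical round figure chosen purely for illustration, since the paper reports only the 539 grasps executed across all experiments, not the exact counts per condition. The test used in the paper may differ.

```python
# Sketch of a two-proportion z-test (pooled variance) on grasp success
# rates. Per-condition counts below are HYPOTHETICAL illustrations;
# only the success percentages come from the abstract.
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for H0: the two underlying success rates are equal."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 27/49 = 55.1% baseline vs. 40/49 = 81.6% with innovations (i)-(iii).
z = two_proportion_z(27, 49, 40, 49)
print(f"z = {z:.2f}")  # |z| > 1.96 would indicate p < 0.05, two-sided
```

With these hypothetical counts the statistic comes out well beyond the 1.96 threshold, which is consistent with the paper's claim that the improvement is significant; the real conclusion of course depends on the actual per-condition counts.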
ISSN: 0278-3649, 1741-3176
DOI: 10.1177/0278364919865338