
A Constrained DMPs Framework for Robot Skills Learning and Generalization From Human Demonstrations

Bibliographic Details
Published in: IEEE/ASME Transactions on Mechatronics, 2021-12, Vol. 26 (6), p. 3265-3275
Main Authors: Lu, Zhenyu, Wang, Ning, Yang, Chenguang
Format: Article
Language: English
Summary: The dynamical movement primitives (DMPs) model is a useful tool for robots to efficiently learn manipulation skills from human demonstrations and then generalize these skills to fulfill new tasks. DMPs have been improved and applied to cases with multiple constraints, such as obstacle avoidance or relative distance limitations in multiagent formations. However, these improved DMPs must change their additional terms according to the specific constraints of each task. In this article, we propose a novel DMPs framework for constrained conditions in robotic skill generalization. First, we summarize the common characteristics of previously modified DMPs with constraints and propose a general DMPs framework covering various classes of constraints. Inspired by barrier Lyapunov functions (BLFs), an additional acceleration term of the general model is derived to compensate for tracking errors between the real and desired trajectories under constraints. Furthermore, we prove convergence of the generated path and discuss the advantages of the proposed method over the existing literature. Finally, we instantiate the novel framework through three experiments: obstacle avoidance in static and dynamic environments and human-like cooperative manipulation, to verify its effectiveness.
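For context, a minimal sketch of the standard discrete DMP transformation and canonical systems that frameworks like this one extend. This is the conventional formulation from the DMP literature, not the paper's constrained method: the BLF-based acceleration term proposed in the article is not included, and all gains, step sizes, and the zero forcing term are illustrative assumptions.

```python
# Standard discrete DMP (illustrative sketch, not the paper's constrained model):
#   tau * dy/dt = z
#   tau * dz/dt = alpha_z * (beta_z * (g - y) - z) + f(x)
#   tau * dx/dt = -alpha_x * x          (canonical phase system)

def dmp_rollout(y0, g, tau=1.0, alpha_z=25.0, beta_z=25.0 / 4.0,
                dt=0.001, steps=2000, forcing=lambda x: 0.0):
    """Euler-integrate a 1-D DMP from y0 toward goal g; returns final y."""
    alpha_x = 1.0                        # phase decay rate (assumed value)
    y, z, x = y0, 0.0, 1.0               # position, scaled velocity, phase
    for _ in range(steps):
        f = forcing(x)                   # learned forcing term (zero here)
        dz = (alpha_z * (beta_z * (g - y) - z) + f) / tau
        dy = z / tau
        dx = -alpha_x * x / tau
        z += dz * dt
        y += dy * dt
        x += dx * dt
    return y

# With beta_z = alpha_z / 4 the system is critically damped, so with a
# zero forcing term the trajectory converges smoothly to the goal g.
print(dmp_rollout(y0=0.0, g=1.0))
```

A learned forcing term (fit to a demonstration) would replace the zero `forcing` callable; constrained variants such as the one in this article additionally inject a coupling/acceleration term into the `dz` equation to keep the trajectory inside the allowed region.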
ISSN: 1083-4435, 1941-014X
DOI: 10.1109/TMECH.2021.3057022