Learn to Adapt to Human Walking: A Model-Based Reinforcement Learning Approach for a Robotic Assistant Rollator

Bibliographic Details
Published in: IEEE Robotics and Automation Letters, 2019-10, Vol. 4 (4), pp. 3774-3781
Main Authors: Chalvatzaki, Georgia, Papageorgiou, Xanthi S., Maragos, Petros, Tzafestas, Costas S.
Format: Article
Language: English
Description
Summary: In this letter, we tackle the problem of adapting the motion of a robotic assistant rollator to patients with different mobility status. The goal is to achieve a coupled human-robot motion in a front-following setting, as if the patient were pushing the rollator himself/herself. To this end, we propose a novel approach using model-based reinforcement learning (MBRL) for adapting the control policy of the robotic assistant. This approach encapsulates our previous work on human tracking and gait analysis from RGB-D and laser streams into a human-in-the-loop decision-making strategy. We use long short-term memory (LSTM) networks to design a human motion intention model and a coupling-parameters forecast model, leveraging the outcome of human gait analysis. An initial LSTM-based policy network was trained via imitation learning from human demonstrations in a motion capture setup. This policy is then fine-tuned with the MBRL framework using tracking data from real patients. A thorough evaluation demonstrates the efficiency of the MBRL approach as a user-adaptive controller.
ISSN: 2377-3766
DOI: 10.1109/LRA.2019.2929996
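
The abstract describes a two-stage training pipeline: an LSTM-based policy is first pretrained by imitation learning on motion-capture demonstrations and then fine-tuned with MBRL using patient tracking data. The listing below is a minimal, self-contained sketch (in PyTorch) of only the imitation-learning stage for an LSTM policy of this kind; all class names, feature and action dimensions, and the mean-squared-error loss are illustrative assumptions and not taken from the paper.

    # Sketch of an LSTM policy pretrained by behavior cloning (imitation learning).
    # An LSTM maps a window of human gait/tracking features to rollator velocity
    # commands; the policy is regressed onto expert demonstrations.
    # Dimensions and names are hypothetical, not the authors' implementation.
    import torch
    import torch.nn as nn

    class LSTMPolicy(nn.Module):
        def __init__(self, obs_dim=8, hidden_dim=64, act_dim=2):
            super().__init__()
            self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, act_dim)  # e.g. linear/angular velocity

        def forward(self, obs_seq, hidden=None):
            out, hidden = self.lstm(obs_seq, hidden)
            return self.head(out), hidden

    def behavior_cloning_step(policy, optimizer, obs_seq, expert_actions):
        # One imitation-learning update: regress policy outputs onto expert actions.
        pred, _ = policy(obs_seq)
        loss = nn.functional.mse_loss(pred, expert_actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage with random stand-ins for motion-capture demonstrations.
    policy = LSTMPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    obs = torch.randn(16, 20, 8)    # 16 sequences, 20 time steps, 8 gait features
    acts = torch.randn(16, 20, 2)   # matching expert velocity commands
    print(behavior_cloning_step(policy, opt, obs, acts))

In the full approach summarized above, such a pretrained policy would subsequently be adapted with the MBRL framework, with learned human motion intention and coupling-parameter models in the loop; that fine-tuning stage is not sketched here.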