
Learn to Adapt to Human Walking: A Model-Based Reinforcement Learning Approach for a Robotic Assistant Rollator

In this letter, we tackle the problem of adapting the motion of a robotic assistant rollator to patients with different mobility status. The goal is to achieve a coupled human-robot motion in a front-following setting, as if the patient were pushing the rollator himself/herself. To this end, we propose a novel approach using model-based reinforcement learning (MBRL) for adapting the control policy of the robotic assistant. This approach encapsulates our previous work on human tracking and gait analysis from RGB-D and laser streams into a human-in-the-loop decision-making strategy. We use long short-term memory (LSTM) networks to design a human motion intention model and a coupling-parameters forecast model, leveraging the outcome of human gait analysis. An initial LSTM-based policy network was trained via imitation learning from human demonstrations in a motion capture setup. This policy is then fine-tuned within the MBRL framework using tracking data from real patients. A thorough evaluation demonstrates the efficiency of the MBRL approach as a user-adaptive controller.
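The record carries only the abstract, so the following is a minimal illustrative sketch of the LSTM forecasting idea it describes, not the authors' implementation: the class name LSTMForecaster, the feature and output dimensions, and the single supervised update (standing in for the imitation-learning pretraining step) are all hypothetical assumptions.

```python
# Illustrative sketch only: a minimal LSTM forecaster in the spirit of the
# letter's coupling-parameters / motion-intention models. All names and
# dimensions are assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Predict the next coupling-parameter vector from a window of
    gait-analysis features (e.g., tracked human pose/velocity)."""
    def __init__(self, feat_dim=8, hidden_dim=64, out_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, out_dim)

    def forward(self, x):              # x: (batch, time, feat_dim)
        h, _ = self.lstm(x)
        return self.head(h[:, -1, :])  # forecast from the last hidden state

# One supervised update, as in an imitation-learning pretraining phase:
model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 20, 8)             # 16 windows of 20 time steps
y = torch.randn(16, 3)                 # target coupling parameters
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```

In the paper's setting, a network pretrained this way on demonstration data would then be fine-tuned inside the MBRL loop using tracking data from real patients.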


Bibliographic Details
Published in: IEEE Robotics and Automation Letters, 2019-10, Vol. 4 (4), p. 3774-3781
Main Authors: Chalvatzaki, Georgia; Papageorgiou, Xanthi S.; Maragos, Petros; Tzafestas, Costas S.
Format: Article
Language:English
Subjects: Adaptation models; automation in life sciences: biotechnology; Decision analysis; Decision making; Gait; Human motion; Human-centered robotics; learning and adaptive systems; Legged locomotion; Machine learning; Motion capture; Navigation; pharmaceutical and health care; Predictive models; Robot dynamics; Robot kinematics; Robot sensing systems; Robotics; Tracking; Walking
DOI: 10.1109/LRA.2019.2929996
ISSN: 2377-3766
Online Access: https://doi.org/10.1109/LRA.2019.2929996