
Learning latent actions to control assistive robots



Bibliographic Details
Published in: Autonomous Robots, 2022-01, Vol. 46 (1), p. 115-147
Main Authors: Losey, Dylan P.; Jeon, Hong Jun; Li, Mengxi; Srinivasan, Krishnan; Mandlekar, Ajay; Garg, Animesh; Bohg, Jeannette; Sadigh, Dorsa
Format: Article
Language: English
Publisher: Springer US (New York)
Description:
Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today’s robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot’s motion in the x-y plane, in another mode the joystick controls the robot’s z-yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu, and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot’s high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis.
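To make the idea in the description concrete: the approach learns a low-dimensional, human-controllable latent action space from offline demonstrations, then decodes joystick inputs through that space into high-dimensional robot actions. The sketch below is a minimal, hypothetical illustration of such an embedding (a conditional autoencoder in PyTorch); the dimensions, network sizes, synthetic data, and training loop are illustrative assumptions and are not taken from the article.

```python
# Illustrative sketch only: embed 7-DoF robot actions into a 2-DoF latent
# space, conditioned on the robot's current state. All details are assumed.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, LATENT_DIM = 7, 7, 2  # assumed: 7-DoF arm, 2-DoF joystick

class LatentActionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: (state, action) -> low-dimensional latent action z
        self.encoder = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, LATENT_DIM),
        )
        # Decoder: (z, state) -> high-dimensional robot action
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM + STATE_DIM, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        return self.decoder(torch.cat([z, state], dim=-1))

# Train on (state, action) pairs from offline task demonstrations
# (placeholder random data stands in for real demonstrations here).
model = LatentActionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
states = torch.randn(512, STATE_DIM)
actions = torch.randn(512, ACTION_DIM)
for _ in range(200):
    recon = model(states, actions)
    loss = ((recon - actions) ** 2).mean()  # reconstruction (MSE) loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At run time, a 2-DoF joystick input is treated as the latent action z and
# decoded, conditioned on the current state, into a 7-DoF robot action.
joystick = torch.tensor([[0.3, -0.8]])
current_state = torch.randn(1, STATE_DIM)
with torch.no_grad():
    robot_action = model.decoder(torch.cat([joystick, current_state], dim=-1))
print(robot_action.shape)  # torch.Size([1, 7])
```

Conditioning the decoder on the robot's current state is what lets the same 2-DoF input produce different high-dimensional motions in different task contexts (e.g., stabbing versus cutting the tofu).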
DOI: 10.1007/s10514-021-10005-w
PMID: 34366568
ISSN: 0929-5593
EISSN: 1573-7527
Subjects:
Artificial Intelligence
Computer Imaging
Control
Controllability
Engineering
Joysticks
Learning
Mapping
Mechatronics
Pattern Recognition and Graphics
People with disabilities
Robot arms
Robot control
Robot dynamics
Robotics
Robotics and Automation
Robots
Service robots
Soy products
Special Issue on Robotics: Science and Systems 2020
Tofu
Vision
Yaw