
Hybrid autonomous control for multi mobile robots

Reinforcement learning can be an adaptive and flexible control method for autonomous systems. It needs no a priori knowledge: behaviors that accomplish given tasks are obtained automatically by repeated trial and error. However, as the complexity of the system grows, the learning cost increases exponentially, so applying reinforcement learning to complex systems such as robots with many redundant degrees of freedom or multi-agent systems is very difficult. Previous work in this field has been restricted to simple robots and small multi-agent systems, and because such systems have little redundancy and limited functionality, the effectiveness of reinforcement learning has been correspondingly limited. In our previous work we addressed these problems and proposed a new reinforcement learning algorithm, Q-learning with dynamic structuring of the exploration space based on a genetic algorithm (QDSEGA). The effectiveness of QDSEGA for redundant robots has been demonstrated on a 12-legged robot and a 50-link manipulator; however, that work was limited to redundant robots, and QDSEGA could not be applied to multiple mobile robots. In this paper, we extend QDSEGA by combining it with rule-based distributed control and propose a hybrid autonomous control method for multiple mobile robots. To demonstrate the effectiveness of the proposed method, simulations of a transportation task carried out by 10 mobile robots are performed, and effective behaviors are obtained.
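
The record contains no source code, so the following Python sketch is only a minimal illustration of the general idea named in the abstract: a genetic algorithm maintains candidate subsets of a redundant action space, and ordinary tabular Q-learning is run inside the restricted exploration space that each subset defines. Everything here is hypothetical (the toy 1-D transport environment, the names step, q_learning and evolve, and all parameter values); it is not the authors' QDSEGA implementation, and it omits both the dynamic restructuring of the exploration space and the rule-based distributed control layer that the proposed hybrid method adds for multiple mobile robots.

import random

# Hypothetical toy environment: a 1-D "transport" track.
# A robot starts at position 0 and must reach GOAL; actions shift it by -1, +1 or +2.
GOAL = 10
ACTIONS = [-1, +1, +2]                 # full (redundant) action set
STATES = range(GOAL + 1)

def step(state, action):
    """Apply an action, clip to the track, and return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + action))
    done = (nxt == GOAL)
    return nxt, (1.0 if done else -0.01), done

def q_learning(action_subset, episodes=200, alpha=0.5, gamma=0.95, eps=0.1):
    """Tabular Q-learning restricted to `action_subset`; returns accumulated reward."""
    q = {(s, a): 0.0 for s in STATES for a in action_subset}
    total = 0.0
    for _ in range(episodes):
        s, done, steps = 0, False, 0
        while not done and steps < 50:  # step cap so useless subsets still terminate
            if random.random() < eps:
                a = random.choice(action_subset)
            else:
                a = max(action_subset, key=lambda b: q[(s, b)])
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(s2, b)] for b in action_subset)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s, total, steps = s2, total + r, steps + 1
    return total

def evolve(pop_size=6, generations=5):
    """GA layer: evolve which subset of the action space is worth exploring."""
    population = [random.sample(ACTIONS, k=random.randint(1, len(ACTIONS)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=q_learning, reverse=True)
        parents = ranked[: pop_size // 2]
        children = [sorted(set(random.choice(parents) + random.choice(parents)))
                    for _ in range(pop_size - len(parents))]   # naive crossover
        population = parents + children
    return max(population, key=q_learning)

if __name__ == "__main__":
    print("best action subset found:", evolve())

In the paper's hybrid scheme, a learning layer of this kind would acquire effective behaviors, while rule-based distributed control coordinates the 10 mobile robots during the transportation task; the sketch above covers only the learning side.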

Saved in:
Bibliographic Details
Published in: Advanced Robotics, 2004-01, Vol. 18 (1), p. 83-99
Main Authors: Ito, Kazuyuki; Gofuku, Akio
Format: Article
Language: English
Subjects: Autonomous; Control; Multi mobile robots; QDSEGA; Redundant system; Reinforcement learning
ISSN: 0169-1864
EISSN: 1568-5535
DOI: 10.1163/156855304322753317
Publisher: Taylor & Francis Group
Source: Taylor and Francis Science and Technology Collection
Citations: Items that this one cites
Items that cite this one
Online Access: Get full text