
FoX: Formation-Aware Exploration in Multi-Agent Reinforcement Learning

Recently, deep multi-agent reinforcement learning (MARL) has gained significant popularity due to its success in various cooperative multi-agent tasks. However, exploration still remains a challenging problem in MARL due to the partial observability of the agents and the exploration space that can grow exponentially as the number of agents increases. Firstly, in order to address the scalability issue of the exploration space, we define a formation-based equivalence relation on the exploration space and aim to reduce the search space by exploring only meaningful states in different formations. Then, we propose a novel formation-aware exploration (FoX) framework that encourages partially observable agents to visit the states in diverse formations by guiding them to be well aware of their current formation solely based on their own observations. Numerical results show that the proposed FoX framework significantly outperforms the state-of-the-art MARL algorithms on Google Research Football (GRF) and sparse Starcraft II multi-agent challenge (SMAC) tasks.
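The formation-based equivalence relation mentioned in the abstract can be illustrated with a minimal, hypothetical Python sketch. This is not the paper's implementation: the signature used below (the sorted multiset of pairwise inter-agent distances) is an assumed stand-in for whatever formation abstraction FoX actually defines. The sketch only shows the general idea of collapsing states that share the same spatial formation, independent of agent permutation or a common translation, and of counting only first visits to each formation class.

    # Hypothetical sketch of a formation-based equivalence relation (illustration only;
    # the paper's actual definition may differ). Two joint states are treated as
    # equivalent if the agents form the same spatial "formation", captured here by the
    # sorted pairwise inter-agent distances, which are invariant to agent permutation
    # and to translating the whole team.
    from itertools import combinations
    import math

    def formation_signature(agent_positions, precision=1):
        """Map a list of (x, y) agent positions to a hashable formation key."""
        dists = sorted(
            round(math.dist(p, q), precision)
            for p, q in combinations(agent_positions, 2)
        )
        return tuple(dists)

    def formation_novelty(visited, agent_positions):
        """Exploration bookkeeping: return 1.0 the first time a formation class is seen."""
        key = formation_signature(agent_positions)
        is_novel = key not in visited
        visited.add(key)
        return 1.0 if is_novel else 0.0  # could serve as a crude intrinsic bonus

    # Example: a permuted and translated copy of the same triangle collapses
    # to the same formation class, so only the first visit counts as novel.
    visited = set()
    print(formation_novelty(visited, [(0, 0), (1, 0), (0, 1)]))  # 1.0 (new formation)
    print(formation_novelty(visited, [(5, 5), (5, 6), (6, 5)]))  # 0.0 (same formation)

Rounding the distances coarsens the equivalence classes; a finer precision would distinguish more formations and thus enlarge the reduced exploration space this construction is meant to shrink.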

Bibliographic Details
Main Authors: Jo, Yonghyeon, Lee, Sunwoo, Yeom, Junghyuk, Han, Seungyul
Format: Conference Proceeding
Language: English
DOI: 10.1609/aaai.v38i12.29196
Published in: Proceedings of the ... AAAI Conference on Artificial Intelligence, 2024, Vol. 38 (12), p. 12985-12994
Published: 2024-03-25
ISSN: 2159-5399
EISSN: 2374-3468