VAOS: Enhancing the stability of cooperative multi-agent policy learning
Published in: Knowledge-Based Systems, 2024-11, Vol. 304, Article 112474
Main Authors:
Format: Article
Language: English
Summary: Multi-agent value decomposition (MAVD) algorithms have achieved remarkable results in applications of multi-agent reinforcement learning (MARL). However, overestimation errors in MAVD algorithms generally lead to unstable phenomena such as severe oscillation and performance degradation during learning. In this work, we propose a method that integrates the advantages of value averaging and operator switching (VAOS) to enhance the learning stability of MAVD algorithms. In particular, we reduce the variance of the target approximation error by averaging the estimates of the target network. Meanwhile, we design an operator switching method that fully combines the optimal-policy learning ability of the Max operator with the superior stability of the Mellowmax operator. Moreover, we theoretically prove the performance of VAOS in reducing the overestimation error. Extensive experimental results show that (1) compared with popular value decomposition algorithms such as QMIX, VAOS can markedly enhance learning stability; and (2) VAOS outperforms other advanced algorithms such as the regularized softmax (RES) algorithm in reducing overestimation error.
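The summary names two ingredients: averaging target-network estimates to reduce the variance of the target approximation error, and switching between the Max and Mellowmax backup operators. The sketch below is a minimal NumPy illustration of those ingredients as commonly defined in the literature (the Mellowmax operator of Asadi & Littman, and target-value averaging in the spirit of Averaged-DQN); the switching rule shown is a placeholder flag, since the paper's actual criterion, networks, and hyperparameters are not reproduced in this record.

import numpy as np

def mellowmax(q_values, omega=10.0):
    # Mellowmax operator: mm_w(q) = (1/w) * log(mean(exp(w * q))).
    # A smooth alternative to max with lower-variance backups.
    z = omega * np.asarray(q_values, dtype=np.float64)
    z_max = z.max()
    # Numerically stable log-mean-exp.
    return (z_max + np.log(np.mean(np.exp(z - z_max)))) / omega

def averaged_target(target_q_snapshots):
    # Value averaging: average Q-estimates from the K most recent
    # target-network snapshots to reduce target-error variance.
    return np.mean(np.stack(target_q_snapshots, axis=0), axis=0)

def switched_backup(avg_q_next, use_max):
    # Operator switching (placeholder criterion): Max for aggressive
    # policy improvement, Mellowmax for stability.
    return avg_q_next.max() if use_max else mellowmax(avg_q_next)

# Hypothetical usage: K = 5 target snapshots of next-state action values.
snapshots = [np.random.randn(4) for _ in range(5)]
q_bar = averaged_target(snapshots)
td_target = 1.0 + 0.99 * switched_backup(q_bar, use_max=False)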
ISSN: 0950-7051
DOI: 10.1016/j.knosys.2024.112474