VGN: Value Decomposition With Graph Attention Networks for Multiagent Reinforcement Learning
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-01, Vol. 35 (1), pp. 182-195
Format: Article
Language: English
Summary: Value decomposition networks and the follow-on value-based studies factorize the joint reward function into individual reward functions for a class of cooperative multiagent reinforcement learning problems in which each agent has its local observation and shares a joint reward signal; most of these previous efforts, however, ignored the graphical information between agents. In this article, a new value decomposition with graph attention network (VGN) method is developed to solve the value functions by introducing the dynamical relationships between agents. The decomposition factor of an agent in this approach can be influenced by the reward signals of all the related agents, and two graph-neural-network-based algorithms (VGN-Linear and VGN-Nonlinear) are designed to solve the value functions of each agent. It is proved theoretically that the proposed methods satisfy the factorizable condition in the centralized training process. The performance of the proposed methods is evaluated on the StarCraft Multiagent Challenge (SMAC) benchmark. Experimental results show that the method outperforms state-of-the-art value-based multiagent reinforcement learning algorithms, especially on very hard tasks that are challenging for existing methods.
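The abstract's central idea, mixing per-agent Q-values with weights produced by graph attention so that the factorizable (monotonic) condition holds, can be sketched in PyTorch as follows. This is an illustrative reconstruction, not the authors' implementation: the class name `VGNLinearMixer`, the embedding dimensions, and the specific attention design are assumptions. The key property it demonstrates is that softmax attention yields nonnegative mixing weights, so the total Q-value is monotone in each agent's individual Q-value.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGNLinearMixer(nn.Module):
    """Sketch of a linear value-decomposition mixer whose per-agent
    weights come from graph attention over agent embeddings.
    (Hypothetical reconstruction; not the paper's exact architecture.)"""

    def __init__(self, embed_dim: int):
        super().__init__()
        # Shared projections for attention queries and keys.
        self.q_proj = nn.Linear(embed_dim, embed_dim, bias=False)
        self.k_proj = nn.Linear(embed_dim, embed_dim, bias=False)

    def forward(self, agent_qs, agent_embeds, adj):
        # agent_qs:     (batch, n_agents)     individual Q-values
        # agent_embeds: (batch, n_agents, d)  per-agent features
        # adj:          (n_agents, n_agents)  0/1 agent graph (with self-loops)
        q = self.q_proj(agent_embeds)                     # (B, N, d)
        k = self.k_proj(agent_embeds)                     # (B, N, d)
        scores = torch.bmm(q, k.transpose(1, 2))          # (B, N, N)
        # Restrict attention to related agents in the graph.
        scores = scores.masked_fill(adj == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)                  # rows sum to 1
        # Each agent's decomposition factor aggregates the attention it
        # receives; weights are nonnegative, so dQ_tot/dQ_i >= 0 and the
        # monotonic (factorizable) condition is satisfied by construction.
        w = attn.mean(dim=1)                              # (B, N), >= 0
        q_tot = (w * agent_qs).sum(dim=-1, keepdim=True)  # (B, 1)
        return q_tot
```

Because the weights come from attention over the agent graph, an agent's contribution to the joint value depends on its related agents, which is the mechanism the abstract contrasts with graph-agnostic factorizations such as plain VDN.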
ISSN: 2162-237X, 2162-2388
DOI: 10.1109/TNNLS.2022.3172572