Learning MAC Protocols in HetNets: A Cooperative Multi-Agent Deep Reinforcement Learning Approach
Main Authors: | , , , |
Format: Conference Proceeding
Language: English
Summary: Traditional human-designed medium access control (MAC) protocols cannot meet the heterogeneous requirements of future 6G wireless networks. Reinforcement learning (RL) algorithms have been proposed in which base stations (BSs) and user equipment (UEs) act as agents that automatically learn MAC protocols satisfying the stringent quality-of-service (QoS) requirements of 6G networks. However, existing RL techniques suffer from a generalization issue: agents fail to identify and explore useful information in a sparse wireless environment. To tackle this challenge, we propose a cooperative multi-agent exploration (CMAE) framework in which the network state space is projected into a low-dimensional space rather than learning a policy directly in the high-dimensional space. The agents begin exploring in the low-dimensional state space and progress toward the high-dimensional space, learning abstracted information from the wireless environment. In the proposed framework, the nodes and BSs collaborate to explore under-explored wireless network states and jointly learn the channel-access and signalling policy. Simulation results show that the proposed CMAE framework outperforms traditional baseline schemes in terms of goodput and collision rate, and has better generalization capabilities.
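The summary describes exploration that starts in a projected low-dimensional state space and gradually expands toward the full high-dimensional space. The sketch below illustrates that general idea with a count-based exploration bonus over a truncating projection; the class name, the projection choice, and the bonus formula are illustrative assumptions, not the paper's actual CMAE algorithm.

```python
import math
from collections import defaultdict

class RestrictedSpaceExplorer:
    """Hypothetical sketch: explore a projected state space, then expand it.

    A full network state is projected onto its first k coordinates; visit
    counts over the projected space drive a count-based exploration bonus,
    so rarely visited abstract states earn larger bonuses. Growing k moves
    exploration from a low- to a high-dimensional abstraction.
    """

    def __init__(self, full_dim, start_dim=1):
        self.full_dim = full_dim
        self.k = start_dim                  # current projection dimensionality
        self.counts = defaultdict(int)      # visit counts in projected space

    def project(self, state):
        # One simple projection choice: keep only the first k coordinates.
        return tuple(state[:self.k])

    def bonus(self, state):
        # Count-based bonus: 1/sqrt(N) for the N-th visit of a projected state.
        key = self.project(state)
        self.counts[key] += 1
        return 1.0 / math.sqrt(self.counts[key])

    def expand(self):
        # Move toward the high-dimensional space; counts are reset because
        # the abstraction (and hence state identity) has changed.
        if self.k < self.full_dim:
            self.k += 1
            self.counts.clear()


# Usage: agents sharing one explorer over a 3-dimensional network state.
explorer = RestrictedSpaceExplorer(full_dim=3)
b1 = explorer.bonus((0, 1, 2))   # first visit of projection (0,): bonus 1.0
b2 = explorer.bonus((0, 5, 9))   # same projection (0,): bonus 1/sqrt(2)
explorer.expand()                # now project onto the first 2 coordinates
b3 = explorer.bonus((0, 1, 7))   # first visit of (0, 1): bonus 1.0
```

In a cooperative setting, the shared visit counts are what let multiple agents coordinate: a state region one agent has already covered yields little bonus for the others, pushing them toward under-explored regions.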
ISSN: 1558-2612
DOI: 10.1109/WCNC57260.2024.10571321