Spatial-Temporal Graph Attention-Based Multi-Agent Reinforcement Learning in Cooperative Edge Caching
Format: Conference Proceeding
Language: English
Online Access: Request full text
Summary: With the increasing number of users connecting to the internet and the number of devices each user owns, the internet is experiencing unprecedented traffic demand. Providing users with a satisfying surfing experience while incurring minimal transmission costs is critical but also challenging. To cope with these difficulties, edge caching is emerging. Edge caching allows Base Stations (BSs) to cache files so that some user requests can be satisfied by the edge rather than the cloud, where the latter results in higher latency and transmission costs. However, state-of-the-art edge caching strategies either assume file popularity is known in advance or lack cooperation between neighbouring BSs. This paper proposes a multi-agent spatial-temporal graph attention neural network caching strategy, named "Double Deep Graph Attention Recurrent Q Network" (DDGARQN). The graph attention block extracts spatial dependencies among neighbouring BSs, and the temporal block captures the dynamics of user preferences at each Base Station (BS) at each time instant. Comprehensive experimental results show that DDGARQN achieves up to a 66% higher cache hit ratio, 6.9% lower latency, and 6.5% lower link load than the state-of-the-art caching strategy.
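The summary describes an architecture that combines a graph-attention block (spatial aggregation over neighbouring BSs) with a recurrent block (temporal request dynamics) feeding a Q-value head over caching actions. The paper's actual network and training details are not given in this record, so the following is only a minimal NumPy sketch of that general idea; all dimensions, layer shapes, and the simplified GRU-style update are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BS, F, H, N_FILES = 4, 8, 16, 10   # BSs, feature dim, hidden dim, candidate files

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class GraphAttentionRecurrentQ:
    """Illustrative sketch: GAT-style spatial aggregation + GRU-style
    temporal state + linear Q head over per-file caching actions."""
    def __init__(self):
        self.W  = rng.standard_normal((F, H)) * 0.1       # shared feature projection
        self.a  = rng.standard_normal(2 * H) * 0.1        # attention vector
        self.Wz = rng.standard_normal((2 * H, H)) * 0.1   # GRU update gate
        self.Wh = rng.standard_normal((2 * H, H)) * 0.1   # GRU candidate state
        self.Wq = rng.standard_normal((H, N_FILES)) * 0.1 # Q-value head
        self.h  = np.zeros((N_BS, H))                     # per-BS hidden state

    def step(self, x, adj):
        """x: (N_BS, F) per-BS request features; adj: (N_BS, N_BS) neighbour mask."""
        z = x @ self.W                                    # projected features
        # pairwise attention scores between BSs (GAT-style)
        scores = np.array([[self.a @ np.concatenate([z[i], z[j]])
                            for j in range(N_BS)] for i in range(N_BS)])
        scores = np.where(adj > 0, scores, -1e9)          # mask non-neighbours
        alpha = softmax(scores, axis=1)                   # attention weights
        agg = alpha @ z                                   # spatial aggregation
        # simplified GRU update capturing temporal request dynamics
        u  = np.concatenate([agg, self.h], axis=1)
        zg = 1.0 / (1.0 + np.exp(-(u @ self.Wz)))         # update gate
        hc = np.tanh(u @ self.Wh)                         # candidate state
        self.h = (1 - zg) * self.h + zg * hc
        return self.h @ self.Wq                           # Q-values: (N_BS, N_FILES)

net = GraphAttentionRecurrentQ()
adj = np.ones((N_BS, N_BS))                               # fully connected neighbourhood
q = net.step(rng.standard_normal((N_BS, F)), adj)
action = q.argmax(axis=1)                                 # greedy file to cache per BS
print(q.shape, action.shape)                              # (4, 10) (4,)
```

In the multi-agent setting the abstract implies, each BS would act on its own row of Q-values, while the attention block lets every agent condition on its neighbours' state; the "Double Deep" part of DDGARQN presumably refers to the standard double-DQN target network, which is omitted here.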
ISSN: 1938-1883
DOI: 10.1109/ICC45041.2023.10278575