Multi-Agent Deep-Q Network-Based Cache Replacement Policy for Content Delivery Networks
Published in: Future Internet, 2024-08, Vol. 16 (8), p. 292
Main Authors: , , ,
Format: Article
Language: English
Summary: In today’s digital landscape, content delivery networks (CDNs) play a pivotal role in ensuring rapid and seamless access to online content across the globe. By strategically deploying a network of edge servers in close proximity to users, CDNs optimize the delivery of digital content. One key mechanism involves caching frequently requested content at these edge servers, which not only alleviates the load on the source CDN server but also enhances the overall user experience. However, the exponential growth in user demands has led to increased network congestion, reducing the cache hit ratio within CDNs. To address this reduction, this paper presents an approach for efficient cache replacement in a dynamic caching environment that maximizes the cache hit ratio via a cooperative cache replacement policy based on reinforcement learning. The proposed system model depicts a mesh network of CDNs, with edge servers catering to user requests and a main source CDN server. The cache replacement problem is first modeled as a Markov decision process and then extended to a multi-agent reinforcement learning problem. We propose a cooperative cache replacement algorithm based on a multi-agent deep-Q network (MADQN), in which the edge servers cooperatively learn to replace cached content so as to maximize the cache hit ratio. Experimental results validate the performance of the proposed approach: the MADQN policy achieves higher cache hit ratios and lower average delays than traditional caching policies. (A minimal code sketch of such a deep-Q eviction agent follows this record.)
ISSN: 1999-5903
DOI: 10.3390/fi16080292
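The abstract describes modeling cache replacement as a Markov decision process and training deep-Q agents at the edge servers. The sketch below is a minimal, single-agent illustration of that idea, assuming PyTorch; the `QNetwork`/`EdgeCacheAgent` names, the state encoding, the per-hit reward, and the epsilon-greedy/replay-buffer details are illustrative assumptions, and the paper's multi-agent cooperation mechanism is not reproduced here.

```python
# Illustrative sketch only: a DQN agent that decides which cached item an edge
# server should evict. Names, dimensions, and rewards are assumptions, not the
# paper's exact MADQN formulation.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps a cache-state feature vector to one Q-value per eviction slot."""

    def __init__(self, state_dim: int, cache_slots: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, cache_slots),  # one action per evictable slot
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class EdgeCacheAgent:
    """Single edge-server agent: picks which cached item to evict on a miss."""

    def __init__(self, state_dim: int, cache_slots: int,
                 gamma: float = 0.99, epsilon: float = 0.1):
        self.q = QNetwork(state_dim, cache_slots)
        self.target_q = QNetwork(state_dim, cache_slots)
        self.target_q.load_state_dict(self.q.state_dict())
        self.optim = optim.Adam(self.q.parameters(), lr=1e-3)
        self.replay = deque(maxlen=10_000)
        self.gamma, self.epsilon, self.cache_slots = gamma, epsilon, cache_slots

    def act(self, state: torch.Tensor) -> int:
        """Epsilon-greedy choice of the cache slot to evict."""
        if random.random() < self.epsilon:
            return random.randrange(self.cache_slots)
        with torch.no_grad():
            return int(self.q(state).argmax())

    def remember(self, s, a, r, s_next):
        """Store a transition; r could be e.g. +1 for a later cache hit."""
        self.replay.append((s, a, r, s_next))

    def learn(self, batch_size: int = 32):
        """One DQN update from a minibatch of (s, a, r, s') transitions."""
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s, a, r, s_next = map(torch.stack, zip(*[
            (s, torch.tensor(a), torch.tensor(r, dtype=torch.float32), sn)
            for s, a, r, sn in batch]))
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.target_q(s_next).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.optim.zero_grad()
        loss.backward()
        self.optim.step()

    def sync_target(self):
        """Periodically copy online weights into the target network."""
        self.target_q.load_state_dict(self.q.state_dict())
```

In a multi-agent setup along the lines of the paper, each edge server would run one such agent, with cooperation (for example, shared experience or a shared reward signal) layered on top; how that cooperation is defined is specific to the paper and is not shown here.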