
A GNN-based proactive caching strategy in NDN networks

Bibliographic Details
Published in: Peer-to-Peer Networking and Applications, 2023-03, Vol. 16(2), pp. 997-1009
Main Authors: Hou, Jiacheng; Lu, Haoye; Nayak, Amiya
Format: Article
Language:English
Description
Summary: As people spend more time watching and sharing videos online, it is critical to provide users with a satisfactory quality of experience (QoE). Leveraging the in-network caching and name-based routing features of Named Data Networks (NDNs), our paper aims to improve user experience through caching. We propose a graph neural network-gain maximization (GNN-GM) cache placement algorithm. First, we use a GNN model to predict users' ratings of unviewed videos. Second, we treat the total predicted rating of a video as the gain of caching that video. Third, we propose a cache placement algorithm that maximizes the caching gain and proactively caches videos. Cache replacement is driven by the cache-gain ranking of videos: videos with higher cache gain replace those with lower cache gain. We compare GNN-GM with two state-of-the-art caching strategies, namely the NMF-based caching strategy and GNN-CPP, as well as with two traditional caching strategies, LCE with LRU and LCE with FIFO. We evaluate the five caching strategies using real-world datasets on a tree network topology, the real-world GEANT network topology, and various random topologies. The experimental results show that our caching policy significantly improves the cache hit ratio, latency, and server load. Notably, GNN-GM achieves a 25% higher cache hit ratio, 5% lower latency, and 7% lower server load than GNN-CPP on GEANT.
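
The gain-ranked placement and replacement described in the summary can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the names predicted_ratings, catalog, user_ids, and the capacity parameter are assumptions, and the GNN rating predictor is treated as a black box that has already produced the rating estimates.

```python
# Illustrative sketch (not the authors' code): rank videos by predicted
# caching gain and fill a fixed-capacity cache greedily. Following the
# abstract, the gain of a video is the sum of the ratings a GNN model
# predicts for the users served by this cache node.
import heapq

def caching_gain(predicted_ratings, video_id, user_ids):
    """Total predicted rating of `video_id` over the users behind this node."""
    return sum(predicted_ratings[u][video_id] for u in user_ids)

def place_cache(predicted_ratings, user_ids, catalog, capacity):
    """Proactive placement: cache the `capacity` videos with the highest gain."""
    gains = {v: caching_gain(predicted_ratings, v, user_ids) for v in catalog}
    return set(heapq.nlargest(capacity, gains, key=gains.get)), gains

def maybe_replace(cache, gains, new_video, new_gain):
    """Gain-ranked replacement: a higher-gain video evicts the lowest-gain one."""
    if not cache:
        return
    victim = min(cache, key=lambda v: gains.get(v, 0.0))
    if new_gain > gains.get(victim, 0.0):
        cache.discard(victim)
        cache.add(new_video)
        gains[new_video] = new_gain
```

In this sketch the rating predictions and per-node user sets are passed in as plain dictionaries; in the paper they would come from the trained GNN model and the NDN topology, respectively.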
ISSN: 1936-6442, 1936-6450
DOI: 10.1007/s12083-023-01464-2