A Graph Neural Network Approach for Caching Performance Optimization in NDN Networks
Published in: IEEE Access, 2022, Vol. 10, p. 1-1
Main Authors: , , ,
Format: Article
Language: English
Subjects:
Summary: Named Data Networking (NDN) is a new network architecture with in-network caching ability. NDN nodes can cache data packets in their Content Store to satisfy future requests. Accurately caching popular content across the network is essential for NDN to reduce the traffic workload and improve network efficiency. However, traditional caching algorithms are poor at predicting future dynamic content popularity. In this paper, we propose a Graph Neural Network-based (GNN-based) caching strategy to optimize caching performance in NDN. First, a Convolutional Neural Network (CNN) is used to extract time-series features for each NDN node. Second, a GNN is applied to predict a content caching probability for each NDN node. Third, each NDN node makes cache replacement decisions based on its content caching probability ranking: content with a high caching probability replaces content with a low probability. We compare our GNN-based caching strategy with three deep learning-based caching techniques, namely the 1D-Convolutional Neural Network (1D-CNN), the Long Short-Term Memory Encoder-Decoder (LSTM-ED), and the Stacked Auto-Encoder (SAE), and with three classical benchmark caching strategies, namely Least Frequently Used (LFU), Least Recently Used (LRU), and First-In-First-Out (FIFO). All caching scenarios are simulated on the Mini-NDN platform and evaluated on tree and arbitrary network topologies. Empirical results suggest that, in the best case, the GNN-based caching approach achieves around a 50% higher cache hit ratio and a 30% lower latency than the other deep learning-based caching strategies.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3217236
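
The summary above outlines a three-step pipeline: per-node time-series feature extraction with a CNN, caching-probability prediction with a GNN, and probability-ranked cache replacement at each node. As an illustrative sketch only (not the authors' implementation), the Python snippet below shows how the last step could work: a fixed-capacity content store that evicts the entry with the lowest predicted caching probability. The class name and interface are hypothetical, and the probability values are assumed to come from a CNN+GNN predictor like the one described in the abstract.

```python
class ProbabilityRankedCache:
    """Fixed-capacity content store whose eviction victim is the entry with
    the lowest predicted caching probability. Illustrative sketch only: the
    probability values are assumed to come from the paper's CNN+GNN predictor.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.prob = {}  # content name -> predicted caching probability

    def lookup(self, name):
        # A cache hit occurs when the requested name is stored.
        return name in self.prob

    def admit(self, name, probability):
        """Insert content; when the store is full, replace the
        lowest-probability entry, but only if the newcomer ranks higher."""
        if name in self.prob or len(self.prob) < self.capacity:
            self.prob[name] = probability
            return True
        victim = min(self.prob, key=self.prob.get)  # lowest-ranked content
        if probability > self.prob[victim]:
            del self.prob[victim]
            self.prob[name] = probability
            return True
        return False  # newcomer ranked too low; not cached


if __name__ == "__main__":
    cs = ProbabilityRankedCache(capacity=2)
    cs.admit("/video/a", 0.9)
    cs.admit("/video/b", 0.2)
    cs.admit("/video/c", 0.6)  # store is full: /video/b (0.2) is evicted
    assert cs.lookup("/video/c") and not cs.lookup("/video/b")
```

This mirrors the replacement rule stated in the abstract (high-probability content replaces low-probability content); how the probabilities themselves are produced by the CNN feature extractor and GNN predictor is the subject of the full paper.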