A GNN-based proactive caching strategy in NDN networks
As people spend more time watching and sharing videos online, it is critical to provide users with a satisfactory quality of experience (QoE). Leveraging the in-network caching and name-based routing features of Named Data Networking (NDN), our paper aims to improve user experience through caching....
Published in: | Peer-to-peer networking and applications 2023-03, Vol.16 (2), p.997-1009 |
---|---|
Main Authors: | Hou, Jiacheng; Lu, Haoye; Nayak, Amiya |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Caching; Communications Engineering; Computer Communication Networks; Engineering; Graph neural networks; Information Systems and Communication Service; Network latency; Network topologies; Networks; Placement; Servers; Signal,Image and Speech Processing; User experience; User satisfaction; Video |
container_end_page | 1009 |
container_issue | 2 |
container_start_page | 997 |
container_title | Peer-to-peer networking and applications |
container_volume | 16 |
creator | Hou, Jiacheng; Lu, Haoye; Nayak, Amiya |
description | As people spend more time watching and sharing videos online, it is critical to provide users with a satisfactory quality of experience (QoE). Leveraging the in-network caching and name-based routing features of Named Data Networking (NDN), our paper aims to improve user experience through caching. We propose a graph neural network-gain maximization (GNN-GM) cache placement algorithm. First, we use a GNN model to predict users’ ratings of unviewed videos. Second, we take the total predicted rating of a video as its caching gain. Third, we propose a cache placement algorithm that maximizes the caching gain and proactively caches videos. Cache replacement is driven by the gain ranking, with higher-gain videos replacing lower-gain ones. We compare GNN-GM with two state-of-the-art caching strategies, namely the NMF-based caching strategy and GNN-CPP, as well as two traditional strategies, LCE with LRU and LCE with FIFO. We evaluate the five caching strategies using real-world datasets on a tree topology, the real-world GEANT topology, and various random topologies. The experimental results show that our caching policy significantly improves the cache hit ratio and reduces latency and server load. Notably, GNN-GM achieves a 25% higher cache hit rate, 5% lower latency, and 7% lower server load than GNN-CPP on GEANT. (A minimal sketch of the gain-based placement and replacement idea is given after the record fields below.) |
doi_str_mv | 10.1007/s12083-023-01464-2 |
format | article |
fulltext | fulltext |
identifier | ISSN: 1936-6442 |
ispartof | Peer-to-peer networking and applications, 2023-03, Vol.16 (2), p.997-1009 |
issn | 1936-6442 (print); 1936-6450 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2809983583 |
source | Springer Nature |
subjects | Algorithms; Caching; Communications Engineering; Computer Communication Networks; Engineering; Graph neural networks; Information Systems and Communication Service; Network latency; Network topologies; Networks; Placement; Servers; Signal,Image and Speech Processing; User experience; User satisfaction; Video |
title | A GNN-based proactive caching strategy in NDN networks |
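The abstract above describes GNN-GM in three steps: a GNN predicts each user's ratings of unviewed videos, a video's caching gain is the sum of its predicted ratings, and placement/replacement keeps the highest-gain videos in each cache. The sketch below is a minimal, hypothetical illustration of the last two steps only, assuming the predicted ratings are already available; the GNN rating predictor, the NDN forwarding plane, and all names here (`video_gains`, `GainRankedCache`, the toy ratings) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the gain-based placement/replacement idea from the abstract.
# The GNN rating predictor and the NDN data plane are not modelled; `predicted_ratings`
# stands in for the GNN's output, and all names are illustrative.
from collections import defaultdict


def video_gains(predicted_ratings):
    """Sum each video's predicted ratings over all users (the video's caching gain)."""
    gains = defaultdict(float)
    for user, ratings in predicted_ratings.items():
        for video, rating in ratings.items():
            gains[video] += rating
    return dict(gains)


class GainRankedCache:
    """Fixed-capacity cache that keeps the videos with the highest caching gain."""

    def __init__(self, capacity, gains):
        self.capacity = capacity
        self.gains = gains      # video -> predicted caching gain
        self.store = set()      # videos currently cached

    def admit(self, video):
        """Cache `video` if there is room or if it outranks the lowest-gain entry."""
        if video in self.store:
            return True
        if len(self.store) < self.capacity:
            self.store.add(video)
            return True
        worst = min(self.store, key=lambda v: self.gains.get(v, 0.0))
        if self.gains.get(video, 0.0) > self.gains.get(worst, 0.0):
            self.store.remove(worst)   # evict the lowest-gain cached video
            self.store.add(video)
            return True
        return False


# Toy usage: two users, three videos.
ratings = {
    "u1": {"v1": 4.5, "v2": 2.0},
    "u2": {"v1": 3.5, "v3": 5.0},
}
gains = video_gains(ratings)            # {'v1': 8.0, 'v2': 2.0, 'v3': 5.0}
cache = GainRankedCache(capacity=2, gains=gains)
for v in ["v2", "v3", "v1"]:
    cache.admit(v)
print(sorted(cache.store))              # ['v1', 'v3'] -- the two highest-gain videos
```

In the paper's setting the gains would come from GNN-predicted ratings and the admit/evict decisions would be made at NDN routers along the delivery paths; here a plain dict and set stand in for both.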