
Elastic Network Cache Control Using Deep Reinforcement Learning

Thanks to the development of virtualization technology, content service providers can flexibly lease virtualized resources from infrastructure service providers when they deploy the cache nodes in edge networks. As a result, they have two orthogonal objectives: to maximize the caching utility on the one hand and minimize the cost of leasing the cache storage on the other hand. This paper presents a caching algorithm using deep reinforcement learning (DRL) that controls the caching policy with the content time-to-live (TTL) values and elastically adjusts the cache size according to a dynamically changing environment to maximize the utility-minus-cost objective. We show that, under non-stationary traffic scenarios, our DRL-based approach outperforms the conventional algorithms known to be optimal under stationary traffic scenarios.


Bibliographic Details
Main Authors: Cho, Chunglae, Shin, Seungjae, Jeon, Hongseok, Yoon, Seunghyun
Format: Conference Proceeding
Language:English
Subjects:
Online Access:Request full text
container_end_page 1008
container_start_page 1006
container_title 2022 13th International Conference on Information and Communication Technology Convergence (ICTC)
creator Cho, Chunglae
Shin, Seungjae
Jeon, Hongseok
Yoon, Seunghyun
description Thanks to the development of virtualization technology, content service providers can flexibly lease virtualized resources from infrastructure service providers when they deploy the cache nodes in edge networks. As a result, they have two orthogonal objectives: to maximize the caching utility on the one hand and minimize the cost of leasing the cache storage on the other hand. This paper presents a caching algorithm using deep reinforcement learning (DRL) that controls the caching policy with the content time-to-live (TTL) values and elastically adjusts the cache size according to a dynamically changing environment to maximize the utility-minus-cost objective. We show that, under non-stationary traffic scenarios, our DRL-based approach outperforms the conventional algorithms known to be optimal under stationary traffic scenarios.
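The abstract describes a cache whose policy is driven by content TTL values and whose leased size is adjusted to maximize utility minus storage cost. As an illustrative sketch only (the class name, fields, and cost model below are assumptions, not the paper's actual DRL algorithm), the objective can be pictured with a minimal TTL cache that tallies hit utility against accumulated leasing cost:

```python
class TTLCache:
    """Minimal TTL-cache sketch for the utility-minus-cost objective.

    Illustrative only: the paper's DRL controller (not shown here)
    would tune `ttl` and the size cap online as traffic shifts.
    """

    def __init__(self, ttl: float, capacity: int, storage_cost: float = 0.01):
        self.ttl = ttl                    # time-to-live per cached item
        self.capacity = capacity          # leased cache size (slots)
        self.storage_cost = storage_cost  # leasing cost per occupied slot per request
        self.store = {}                   # key -> expiry timestamp
        self.hits = 0
        self.requests = 0
        self.cost = 0.0

    def request(self, key, now: float) -> bool:
        """Serve one request at time `now`; return True on a cache hit."""
        self.requests += 1
        self.cost += self.storage_cost * len(self.store)  # pay for leased slots
        # Drop entries whose TTL has expired.
        self.store = {k: t for k, t in self.store.items() if t > now}
        if key in self.store:
            self.hits += 1
            return True
        if len(self.store) < self.capacity:
            self.store[key] = now + self.ttl  # admit with a fresh TTL
        return False

    def objective(self) -> float:
        """Hit utility minus accumulated leasing cost (utility-minus-cost)."""
        return self.hits - self.cost
```

A DRL agent in this setting would observe request statistics and periodically choose new `ttl` and `capacity` values so that `objective()` grows fastest, trading hit utility against storage rent.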
doi_str_mv 10.1109/ICTC55196.2022.9952648
format conference_proceeding
eisbn 1665499397
eisbn 9781665499392
publisher IEEE
publication_date 2022-10-19
fulltext fulltext_linktorsrc
identifier EISSN: 2162-1241
ispartof 2022 13th International Conference on Information and Communication Technology Convergence (ICTC), 2022, p.1006-1008
issn 2162-1241
language eng
recordid cdi_ieee_primary_9952648
source IEEE Xplore All Conference Series
subjects Cache storage
Costs
Deep learning
deep reinforcement learning
elastic caching
Heuristic algorithms
Information and communication technology
non-stationary traffic
Reinforcement learning
utility-minus-cost maximization
Virtualization
title Elastic Network Cache Control Using Deep Reinforcement Learning
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-07T21%3A57%3A35IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Elastic%20Network%20Cache%20Control%20Using%20Deep%20Reinforcement%20Learning&rft.btitle=2022%2013th%20International%20Conference%20on%20Information%20and%20Communication%20Technology%20Convergence%20(ICTC)&rft.au=Cho,%20Chunglae&rft.date=2022-10-19&rft.spage=1006&rft.epage=1008&rft.pages=1006-1008&rft.eissn=2162-1241&rft_id=info:doi/10.1109/ICTC55196.2022.9952648&rft.eisbn=1665499397&rft.eisbn_list=9781665499392&rft_dat=%3Cieee_CHZPO%3E9952648%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i118t-65f0ac654f7d08c5985e0a42eace417f133f9fb7839968c070e9f232c9321dfb3%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=9952648&rfr_iscdi=true