SON Coordination in Heterogeneous Networks: A Reinforcement Learning Framework
An important problem for today's mobile network operators is to bring down capital and operational expenditures. One strategy is to automate the parameter tuning of small cells through so-called self-organizing network (SON) functionalities, such as cell range expansion (CRE), mobility robustness optimization (MRO), or enhanced inter-cell interference coordination (eICIC). Running several of these functionalities in the network inevitably creates conflicts; for example, two functions may try to change the same parameter in opposite directions. This raises the need for a SON coordinator (SONCO) meant to arbitrate the parameter-change requests of the SON functions while ensuring some degree of fairness. It is difficult to anticipate the impact of accepting several simultaneous requests. In this paper, we propose a SONCO design based on reinforcement learning (RL), which allows the coordinator to learn from previous experience and improve future decisions. RL algorithms are typically complex; to reduce this complexity, we employ two flavors of function approximation and provide a case study. Results show that the proposed SONCO design can bias the fairness among the SON functions by means of weights attributed to them. We also evaluate the tracking capability of the algorithms.
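The coordination idea above — an RL agent that accepts or rejects conflicting parameter-change requests and biases fairness through per-function weights — can be illustrated with a minimal sketch. The toy Python example below uses plain tabular Q-learning over a coarsely aggregated state; the SON function labels, reward shape, weight values, and state aggregation are assumptions made for illustration only and do not reproduce the paper's algorithm or its function-approximation variants.

```python
# Toy sketch (not the paper's algorithm): a tabular Q-learning arbiter that
# accepts or rejects parameter-change requests from three SON functions.
# Function names, reward shape, and state aggregation are illustrative assumptions.
import random
from collections import defaultdict

SON_FUNCTIONS = ["CRE", "MRO", "eICIC"]            # requesting SON functions (illustrative)
WEIGHTS = {"CRE": 1.0, "MRO": 2.0, "eICIC": 1.0}   # fairness-biasing weights (assumed values)
ACTIONS = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]  # accept(1)/reject(0) mask

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1              # learning rate, discount, exploration
Q = defaultdict(float)                             # Q[(state, action)] -> value

def aggregate_state(requests):
    """Coarse state aggregation: keep only the direction of each requested change."""
    return tuple((requests[f] > 0) - (requests[f] < 0) for f in SON_FUNCTIONS)

def choose_action(state):
    """Epsilon-greedy selection of an accept/reject mask."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def toy_reward(requests, action):
    """Weighted satisfaction of accepted requests, minus a penalty for accepting
    opposite-direction changes of the (assumed shared) parameter."""
    accepted = [f for f, keep in zip(SON_FUNCTIONS, action) if keep]
    satisfaction = sum(WEIGHTS[f] for f in accepted if requests[f] != 0)
    directions = {requests[f] > 0 for f in accepted if requests[f] != 0}
    return satisfaction - (3.0 if len(directions) > 1 else 0.0)

def step(requests, next_requests):
    """One arbitration round followed by a standard Q-learning update."""
    state, next_state = aggregate_state(requests), aggregate_state(next_requests)
    action = choose_action(state)
    reward = toy_reward(requests, action)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

if __name__ == "__main__":
    random.seed(0)
    rand_requests = lambda: {f: random.choice([-1, 0, 1]) for f in SON_FUNCTIONS}
    reqs = rand_requests()
    for _ in range(20000):                          # learn from simulated request streams
        nxt = rand_requests()
        step(reqs, nxt)
        reqs = nxt
    conflict = {"CRE": +1, "MRO": -1, "eICIC": 0}   # CRE and MRO pull in opposite directions
    best = max(ACTIONS, key=lambda a: Q[(aggregate_state(conflict), a)])
    print("greedy accept mask for", conflict, "->", best)
```

With the assumed weights, the greedy policy learns to accept the higher-weighted MRO request and reject the conflicting CRE request. In the paper itself, the complexity of exact RL is reduced with two flavors of function approximation rather than the full Q-table used in this sketch.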
Published in: | IEEE Transactions on Wireless Communications, 2016-09, Vol. 15 (9), pp. 5835-5847 |
---|---|
Main Authors: | Iacoboaiea, Ovidiu-Constantin; Sayrac, Berna; Ben Jemaa, Sana; Bianchi, Pascal |
Format: | Article |
Language: | English |
Subjects: | Algorithm design and analysis; Algorithms; Capital expenditures; CRE; eICIC; Function approximation; Heterogeneous networks; Learning (artificial intelligence); LTE; Mobile communication; Mobile computing; MRO; Optimization; Reinforcement learning; SON coordination; SON instances; State aggregation; Telecommunications industry; Wireless communication |
DOI: | 10.1109/TWC.2016.2571695 |
ISSN: | 1536-1276 |
EISSN: | 1558-2248 |
Publisher: | IEEE, New York |