A Safe Deep Reinforcement Learning Approach for Energy Efficient Federated Learning in Wireless Communication Networks
Progressing towards a new era of Artificial Intelligence (AI)-enabled wireless networks, concerns regarding the environmental impact of AI have been raised in both industry and academia. Federated Learning (FL) has emerged as a key privacy-preserving decentralized AI technique. Despite efforts currently being made in FL, its environmental impact is still an open problem. Targeting the minimization of the overall energy consumption of an FL process, we propose the orchestration of the computational and communication resources of the involved devices to minimize the total energy required, while guaranteeing a certain performance of the model. To this end, we propose a Soft Actor-Critic Deep Reinforcement Learning (DRL) solution, in which a penalty function introduced during training penalizes strategies that violate the constraints of the environment, contributing towards a safe RL process. A device-level synchronization method, along with a computationally cost-effective FL environment, is proposed with the goal of further reducing the energy consumption and communication overhead. Evaluation results show the effectiveness and robustness of the proposed scheme compared to four state-of-the-art baseline solutions across different network environments and FL architectures, achieving a decrease of up to 94% in total energy consumption.
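The abstract describes shaping the DRL reward with a penalty function so that resource-allocation strategies violating environment constraints are discouraged. The paper's actual formulation is not reproduced here; as an illustration only, the general idea — a base reward of negative energy consumption, reduced by a penalty term per violated constraint — can be sketched as follows. The function name, the violation encoding, and the penalty weight are all hypothetical.

```python
def shaped_reward(energy_joules, violations, penalty_weight=10.0):
    """Toy safe-RL reward shaping: base reward is the negative total
    energy consumed in a step, and each violated constraint (encoded
    as 1 = violated, 0 = satisfied) subtracts a fixed penalty.

    Note: illustrative sketch only; the weight and encoding are
    assumptions, not the formulation used in the paper.
    """
    return -energy_joules - penalty_weight * sum(violations)


# A constraint-respecting, low-energy action scores higher than a
# cheaper action that violates a constraint, steering the agent
# towards safe strategies during training.
safe = shaped_reward(5.0, [0, 0])       # -5.0
unsafe = shaped_reward(2.0, [1, 0])     # -12.0
```

An agent such as a Soft Actor-Critic learner would receive this shaped value as its per-step reward, so that maximizing expected return jointly minimizes energy and constraint violations.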
Published in: IEEE Transactions on Green Communications and Networking, 2024-12, Vol. 8 (4), pp. 1862-1874
Main Authors: Koursioumpas, Nikolaos; Magoula, Lina; Petropouleas, Nikolaos; Thanopoulos, Alexandros-Ioannis; Panagea, Theodora; Alonistioti, Nancy; Gutierrez-Estevez, M. A.; Khalili, Ramin
Format: Article
Language: English
Subjects: Beyond 5G; Computational modeling; Costs; Energy consumption; Energy efficiency; Federated learning; Performance evaluation; Reinforcement learning; Resource management; Training
Online Access: Get full text
creator | Koursioumpas, Nikolaos; Magoula, Lina; Petropouleas, Nikolaos; Thanopoulos, Alexandros-Ioannis; Panagea, Theodora; Alonistioti, Nancy; Gutierrez-Estevez, M. A.; Khalili, Ramin |
description | Progressing towards a new era of Artificial Intelligence (AI) - enabled wireless networks, concerns regarding the environmental impact of AI have been raised both in industry and academia. Federated Learning (FL) has emerged as a key privacy preserving decentralized AI technique. Despite efforts currently being made in FL, its environmental impact is still an open problem. Targeting the minimization of the overall energy consumption of an FL process, we propose the orchestration of computational and communication resources of the involved devices to minimize the total energy required, while guaranteeing a certain performance of the model. To this end, we propose a Soft Actor Critic Deep Reinforcement Learning (DRL) solution, where a penalty function is introduced during training, penalizing the strategies that violate the constraints of the environment, and contributing towards a safe RL process. A device level synchronization method, along with a computationally cost effective FL environment are proposed, with the goal of further reducing the energy consumption and communication overhead. Evaluation results show the effectiveness and robustness of the proposed scheme compared to four state-of-the-art baseline solutions on different network environments and FL architectures, achieving a decrease of up to 94% in the total energy consumption. |
doi_str_mv | 10.1109/TGCN.2024.3372695 |
format | article |
identifier | ISSN: 2473-2400 |
source | IEEE Electronic Library (IEL) Journals |
subjects | Beyond 5G; Computational modeling; Costs; Energy consumption; Energy efficiency; Federated learning; Performance evaluation; Reinforcement learning; Resource management; Training |