Joint learning and optimization for Federated Learning in NOMA-based networks
Over the past decade, the use of machine learning (ML) techniques has increased substantially across many applications. Federated Learning (FL) refers to collaborative techniques that avoid the exchange of raw data between the nodes in a distributed training task, which addresses important issues such as data privacy, energy consumption, and the limited availability of clean spectral slots. In this work, we investigate the performance of FL updates with edge devices connected to a leading device (LD) over practical wireless links, where uplink updates from the edge devices to the LD are shared without orthogonalizing the resources. In particular, we adopt a non-orthogonal multiple access (NOMA) uplink scheme and analytically investigate its effect on the convergence round (CR) and the accuracy of the FL model. Moreover, we formulate an optimization problem that minimizes the CR and guarantees communication fairness between users while accounting for per-device energy consumption and the accuracy of the realized global FL model. Monte-Carlo simulations verify our derived analytical expressions and reveal the importance of the joint optimization approach, which significantly reduces communication latency while accounting for user fairness in the NOMA network, improving energy consumption and yielding acceptable accuracy compared with several baselines.
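The abstract describes FL as devices training locally and sharing only model updates with a leading device, which aggregates them. A minimal FedAvg-style sketch of that idea (illustrative only — the function names, learning rate, and numbers below are hypothetical and not taken from the paper):

```python
# Minimal FedAvg-style sketch: devices take a local gradient step,
# the leading device averages updates weighted by local data size.
import numpy as np

def local_update(weights, grad, lr=0.1):
    # One gradient step on a device's private data (grad stands in for it).
    return weights - lr * grad

def fedavg(updates, n_samples):
    # Weighted average of device updates, proportional to dataset sizes.
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(updates, n_samples))

# Example round: three devices, a two-parameter model for simplicity.
w0 = np.array([1.0, -2.0])
updates = [local_update(w0, g) for g in (np.array([0.2, 0.0]),
                                         np.array([0.0, 0.4]),
                                         np.array([0.1, 0.1]))]
new_global = fedavg(updates, n_samples=[100, 100, 200])
```

The raw per-device data never leaves the device; only the updated weights are sent uplink, which is what makes the wireless link (here, NOMA) the bottleneck the paper optimizes.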
Saved in:
Published in: | Pervasive and mobile computing 2023-02, Vol.89, p.101739, Article 101739 |
---|---|
Main Authors: | Mrad, Ilyes; Hamila, Ridha; Erbad, Aiman; Gabbouj, Moncef |
Format: | Article |
Language: | English |
Subjects: | Access fairness; Energy; Federated Learning; Model accuracy; Non-orthogonal multiple access (NOMA) |
creator | Mrad, Ilyes; Hamila, Ridha; Erbad, Aiman; Gabbouj, Moncef |
description | Over the past decade, the use of machine learning (ML) techniques has increased substantially across many applications. Federated Learning (FL) refers to collaborative techniques that avoid the exchange of raw data between the nodes in a distributed training task, which addresses important issues such as data privacy, energy consumption, and the limited availability of clean spectral slots. In this work, we investigate the performance of FL updates with edge devices connected to a leading device (LD) over practical wireless links, where uplink updates from the edge devices to the LD are shared without orthogonalizing the resources. In particular, we adopt a non-orthogonal multiple access (NOMA) uplink scheme and analytically investigate its effect on the convergence round (CR) and the accuracy of the FL model. Moreover, we formulate an optimization problem that minimizes the CR and guarantees communication fairness between users while accounting for per-device energy consumption and the accuracy of the realized global FL model. Monte-Carlo simulations verify our derived analytical expressions and reveal the importance of the joint optimization approach, which significantly reduces communication latency while accounting for user fairness in the NOMA network, improving energy consumption and yielding acceptable accuracy compared with several baselines. |
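The description's key mechanism is uplink NOMA: devices transmit simultaneously on shared resources, and the receiver separates them by successive interference cancellation (SIC), decoding the strongest signal first and subtracting it before decoding the next. A hypothetical illustration of the resulting per-user achievable rates (a textbook SIC model, not the paper's specific system model; all parameter values are made up):

```python
# Uplink NOMA rate sketch with SIC at the receiver: decode the strongest
# received signal first; each later user sees less residual interference.
import math

def noma_uplink_rates(powers, gains, noise=1.0):
    # Received signal powers, sorted in SIC decoding order (strongest first).
    rx = sorted((p * g for p, g in zip(powers, gains)), reverse=True)
    rates = []
    for i, s in enumerate(rx):
        interference = sum(rx[i + 1:])  # signals not yet cancelled
        rates.append(math.log2(1 + s / (interference + noise)))
    return rates

# Two users sharing the same resource block (illustrative values).
rates = noma_uplink_rates(powers=[4.0, 1.0], gains=[1.0, 1.0], noise=1.0)
```

Because all users transmit at once instead of waiting for orthogonal slots, the per-round upload latency of FL updates can drop, which is the lever behind the paper's convergence-round minimization.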
doi_str_mv | 10.1016/j.pmcj.2022.101739 |
format | article |
identifier | ISSN: 1574-1192 |
ispartof | Pervasive and mobile computing, 2023-02, Vol.89, p.101739, Article 101739 |
issn | 1574-1192 (print); 1873-1589 (electronic) |
language | eng |
source | Elsevier |
subjects | Access fairness; Energy; Federated Learning; Model accuracy; Non-orthogonal multiple access (NOMA) |
title | Joint learning and optimization for Federated Learning in NOMA-based networks |