
Federated Learning in NOMA Networks: Convergence, Energy and Fairness-Based Design

Bibliographic Details
Main Authors: Mrad, Ilyes, Samara, Lutfi, Al-Abbasi, Abubakr, Hamila, Ridha, Erbad, Aiman, Kiranyaz, Serkan
Format: Conference Proceeding
Language: English
Online Access: Request full text
Description
Summary: Federated Learning (FL) is a collaborative machine learning (ML) approach in which different nodes in a network contribute to learning the model parameters. FL also provides several attractive features, such as data privacy and energy efficiency. Due to its collaborative nature, model parameters must be exchanged efficiently among nodes while accounting for the scarce availability of clean spectral slots. In this work, we propose low-power, efficient algorithms for exchanging FL model parameter updates. We consider mobile edge nodes connected to a leading node (LD) over practical wireless links, where uplink updates from the nodes to the LD are shared without orthogonalizing the resources. In particular, we adopt a non-orthogonal multiple access (NOMA) uplink scheme and investigate its effect on the convergence round (CR) of the model updates. We derive an analytical expression for the CR and leverage it to formulate an optimization problem that minimizes the total number of communication rounds and maximizes the communication fairness among the nodes. We further investigate the performance of our proposed algorithms under different factors, including limited per-node energy and node heterogeneity. Monte-Carlo simulations are used to verify the accuracy of the derived CR expression. Moreover, through comprehensive simulations, we show that our proposed schemes substantially reduce the communication latency between the LD and the nodes and improve the communication fairness among the nodes.
ISSN: 2166-9589
DOI: 10.1109/PIMRC54779.2022.9977962
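
The summary above describes nodes uploading model updates to a leading node (LD) over a shared NOMA uplink, with the per-round communication cost driving the convergence-round analysis. As a minimal illustrative sketch (not the paper's derived CR expression or its optimization), the Python snippet below simulates per-round uplink latency when the LD applies successive interference cancellation (SIC) and then performs a FedAvg-style aggregation. All numeric values (number of nodes, transmit powers, bandwidth, noise power, model size) are assumptions chosen only for illustration.

```python
# Illustrative sketch (not the paper's algorithm): K edge nodes upload model
# updates to a leading node (LD) over a shared NOMA uplink. The LD applies
# successive interference cancellation (SIC), decoding stronger users first;
# each node's upload latency follows from its achievable rate.
# All parameter values below (powers, bandwidth, model size) are assumptions.
import numpy as np

rng = np.random.default_rng(0)

K = 4                               # number of edge nodes (assumed)
bandwidth_hz = 1e6                  # shared uplink bandwidth (assumed)
noise_power = 1e-9                  # receiver noise power in watts (assumed)
tx_power = 0.1 * np.ones(K)         # per-node transmit power (assumed)
model_bits = 1e6                    # size of one model update in bits (assumed)

# Rayleigh-fading channel power gains from each node to the LD
h = rng.exponential(scale=1.0, size=K)

# SIC decoding order: strongest received signal decoded first;
# interference comes from the not-yet-decoded (weaker) nodes.
order = np.argsort(h * tx_power)[::-1]
rates = np.zeros(K)
for i, k in enumerate(order):
    interferers = order[i + 1:]
    interference = np.sum(tx_power[interferers] * h[interferers])
    sinr = tx_power[k] * h[k] / (interference + noise_power)
    rates[k] = bandwidth_hz * np.log2(1.0 + sinr)

# Assuming synchronous aggregation, the per-round uplink latency is set by
# the slowest node, since the LD needs all updates before aggregating.
upload_time = model_bits / rates
round_latency = upload_time.max()

# FedAvg-style aggregation of the received updates (equal weights assumed)
local_models = [rng.normal(size=10) for _ in range(K)]
global_model = np.mean(local_models, axis=0)

print(f"per-node rates (Mbit/s): {np.round(rates / 1e6, 2)}")
print(f"NOMA round latency (s): {round_latency:.3f}")
```

In this toy setting the shared NOMA uplink lets all nodes transmit simultaneously, so the round latency is governed by the weakest node's SIC rate rather than by a sum of orthogonal time slots; the paper's actual design additionally accounts for per-node energy limits, heterogeneity, and fairness, which this sketch does not model.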