
Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks

Bibliographic Details
Published in: IEEE Transactions on Mobile Computing, 2023-01, Vol. 22 (1), p. 541-555
Main Authors: Nguyen, Minh N. H., Tran, Nguyen H., Tun, Yan Kyaw, Han, Zhu, Hong, Choong Seon
Format: Magazine article
Language: English
Description
Summary: Federated Learning is a new learning scheme for collaboratively training a shared prediction model while keeping data locally on participating devices. In this paper, we study a new model of multiple federated learning services at the multi-access edge computing server. Accordingly, the sharing of CPU resources among learning services at each mobile device for the local training process and the allocation of communication resources among mobile devices for exchanging learning information must be considered. Furthermore, the convergence performance of each learning service depends on a hyper-learning rate parameter that needs to be precisely decided. Towards this end, we propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL, regarding the energy consumption of mobile devices and overall learning time. We design a centralized algorithm based on the block coordinate descent method and a decentralized JP-miADMM algorithm for solving the MS-FEDL problem. Unlike the centralized approach, the decentralized approach requires many iterations to obtain the solution, but it allows each learning service to independently manage its local resources and learning process without revealing the learning service information. Our simulation results demonstrate the convergence performance of our proposed algorithms and their superior performance compared to a heuristic strategy.
ISSN: 1536-1233, 1558-0660
DOI: 10.1109/TMC.2021.3085979
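
To make the centralized approach named in the abstract more concrete, below is a minimal sketch of a block coordinate descent (BCD) loop that cycles over CPU shares, bandwidth fractions, and a hyper-learning rate. It is only an illustration under assumed names and a toy cost model: the objective, the variables (cpu, bw, theta), the constraint bounds, and the grid-search subproblem solver are hypothetical stand-ins, not the MS-FEDL formulation or the algorithm from the paper.

```python
# Illustrative BCD sketch only; the cost model and variable names are hypothetical,
# not the MS-FEDL problem or solver described in the paper.
import numpy as np

rng = np.random.default_rng(0)
S, N = 3, 5                                   # hypothetical: S learning services, N mobile devices
w_energy = rng.uniform(0.5, 1.5, (S, N))      # per-service/device energy weights (made up)
w_time = rng.uniform(0.5, 1.5, (S, N))        # per-service/device time weights (made up)
EPS = 1e-3

def cost(cpu, bw, theta):
    """Toy convex surrogate: energy grows with the CPU share, completion time shrinks
    with CPU share and bandwidth, and the log(1/theta) factor mimics how a
    hyper-learning rate could scale the number of global rounds."""
    energy = np.sum(w_energy * cpu ** 2)
    time = np.sum(w_time / (cpu * bw[None, :])) * (1.0 + np.log(1.0 / theta))
    return energy + time

def argmin_1d(f, lo, hi, steps=200):
    """Crude 1-D grid search standing in for a per-block convex solver."""
    grid = np.linspace(lo, hi, steps)
    return grid[int(np.argmin([f(x) for x in grid]))]

cpu = np.full((S, N), 1.0 / S)   # CPU share of each service on each device (per-device budget <= 1)
bw = np.full(N, 1.0 / N)         # uplink bandwidth fraction of each device (total budget <= 1)
theta = 0.5                      # hyper-learning rate parameter in (0, 1)

for it in range(20):
    # Block 1: CPU shares, one coordinate at a time, respecting each device's budget.
    for n in range(N):
        for s in range(S):
            budget = 1.0 - (cpu[:, n].sum() - cpu[s, n])
            def f_cpu(x, s=s, n=n):
                trial = cpu.copy(); trial[s, n] = x
                return cost(trial, bw, theta)
            cpu[s, n] = argmin_1d(f_cpu, EPS, max(budget, EPS))
    # Block 2: bandwidth fractions, respecting the shared uplink budget.
    for n in range(N):
        budget = 1.0 - (bw.sum() - bw[n])
        def f_bw(x, n=n):
            trial = bw.copy(); trial[n] = x
            return cost(cpu, trial, theta)
        bw[n] = argmin_1d(f_bw, EPS, max(budget, EPS))
    # Block 3: the hyper-learning rate, a single scalar on (0, 1).
    theta = argmin_1d(lambda x: cost(cpu, bw, x), EPS, 1.0 - EPS)
    print(f"iter {it:2d}  cost = {cost(cpu, bw, theta):.3f}")
```

In the decentralized setting that the abstract attributes to JP-miADMM, each learning service would instead solve its own subproblem privately and coordinate the shared CPU and bandwidth budgets through ADMM-style consensus updates, trading extra iterations for not revealing learning service information.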