Joint Model Pruning and Device Selection for Communication-Efficient Federated Edge Learning
Published in: IEEE Transactions on Communications, 2022-01, Vol. 70, No. 1, pp. 231-244
Format: Article
Language: English
Summary: In recent years, wireless federated learning (FL) has been proposed to support mobile intelligent applications over wireless networks, protecting data privacy and security by exchanging model parameters between mobile devices and the base station (BS) instead of raw data. However, learning latency increases with the neural network scale due to limited local computing power and communication bandwidth. To tackle this issue, we introduce model pruning for wireless FL to reduce the neural network scale. Device selection is also considered to further improve learning performance: by removing stragglers with low computing power or poor channel conditions, the model aggregation loss caused by model pruning can be alleviated and the communication overhead effectively reduced. We analyze the convergence rate and learning latency of the proposed model pruning method and formulate an optimization problem that maximizes the convergence rate under a given learning latency budget by jointly optimizing the pruning ratio, device selection, and wireless resource allocation. Solving this problem yields closed-form solutions for the pruning ratio and wireless resource allocation, together with a threshold-based device selection strategy. Finally, extensive experiments demonstrate that the proposed model pruning algorithm outperforms existing schemes.
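To make the two key ideas in the summary concrete, the following is a minimal sketch of magnitude-based model pruning and threshold-based device selection. It is an illustrative toy, not the paper's method: the function names, the use of weight magnitudes as the pruning criterion, and the per-device thresholds (`rate_min`, `gain_min`) are all assumptions for illustration; the paper derives its pruning ratio and selection thresholds from a convergence-rate optimization.

```python
import numpy as np

def prune_by_magnitude(weights, pruning_ratio):
    """Zero out the smallest-magnitude entries of a weight array.

    pruning_ratio is the fraction of parameters removed (0 <= ratio < 1).
    Magnitude pruning is one common criterion; the paper optimizes the
    ratio itself, not the criterion.
    """
    flat = np.abs(weights).ravel()
    k = int(pruning_ratio * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the cut-off
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def select_devices(compute_rates, channel_gains, rate_min, gain_min):
    """Threshold-based selection: keep devices whose local computing rate
    and channel gain both exceed (hypothetical) thresholds, dropping
    stragglers that would inflate the per-round latency."""
    return [i for i, (r, g) in enumerate(zip(compute_rates, channel_gains))
            if r >= rate_min and g >= gain_min]

# Toy usage: prune half the weights, then keep only non-straggler devices.
w = np.array([0.1, -0.5, 0.03, 2.0])
pruned = prune_by_magnitude(w, 0.5)          # -> [0.0, -0.5, 0.0, 2.0]
kept = select_devices([5.0, 1.0, 8.0],
                      [0.9, 0.8, 0.2],
                      rate_min=2.0, gain_min=0.5)  # -> [0]
```

In the paper's setting, each round a selected device would prune its local model at the optimized ratio before uploading, so the communication payload shrinks roughly in proportion to `pruning_ratio`.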
ISSN: 0090-6778, 1558-0857
DOI: 10.1109/TCOMM.2021.3124961