
Communication-efficient Federated Learning for Power Load Forecasting in Electric IoTs


Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main Authors: Mao, Zhengxiong; Li, Hui; Huang, Zuyuan; Yang, Chuanxu; Li, Yanan; Zhou, Zihao
Format: Article
Language: English
Description
Summary: With the construction of the modern power system, power load forecasting is essential to keeping the electric Internet of Things in operation. However, it usually requires collecting massive amounts of power load data on a server, which risks leaking the privacy of the raw data. Federated learning can enhance the privacy of clients' raw power load data by transmitting model updates instead of raw data, but this requires frequent communication with the server. To address the growing communication burden this places on resource-heterogeneous clients, a communication-efficient federated learning algorithm based on Compressed Model Updates and Lazy uploAd (CMULA-FL) is proposed to reduce the communication cost. CMULA-FL also integrates an error compensation strategy to improve model utility. First, a compression operator compresses the transmitted model updates, and only updates with large norms are uploaded, reducing both the per-epoch communication cost and the transmission frequency. Second, the errors introduced by compression and lazy upload are measured and accumulated into the next epoch's update, which improves model utility. Finally, simulation experiments on benchmark power load data show that the communication cost decreases by at least 60% compared with the baseline, with a controlled loss of prediction accuracy.
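The abstract's combination of update compression, norm-based lazy upload, and error compensation can be sketched roughly as follows. This is an illustrative sketch only, not the authors' exact CMULA-FL algorithm: the choice of a top-k compression operator, the norm threshold test for lazy upload, and all function and parameter names are assumptions.

```python
import numpy as np

def compress_top_k(update, k):
    """Keep only the k largest-magnitude entries (one common compression operator)."""
    flat = update.flatten()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    compressed = np.zeros_like(flat)
    compressed[idx] = flat[idx]
    return compressed.reshape(update.shape)

def client_round(update, residual, k, lazy_threshold):
    """One client communication round: error compensation + compression + lazy upload.

    `residual` accumulates the error from compression and from skipped uploads,
    and is added back into the next round's update (error feedback), so no
    gradient information is permanently discarded.
    Returns (message_to_server_or_None, residual_for_next_round).
    """
    corrected = update + residual            # compensate for last round's error
    compressed = compress_top_k(corrected, k)
    if np.linalg.norm(compressed) < lazy_threshold:
        # Lazy upload: the update is too small to be worth sending this epoch;
        # skip the upload and carry the whole corrected update forward.
        return None, corrected
    new_residual = corrected - compressed    # compression error, kept for next epoch
    return compressed, new_residual
```

Because the residual is re-added each round, anything dropped by compression or a skipped upload eventually reaches the server, which is what keeps the accuracy loss controlled while most of the communication is saved.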
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3262171