
FedQOGD: Federated Quantized Online Gradient Descent with Distributed Time-Series Data

Bibliographic Details
Main Authors: Park, Jonghwan; Kwon, Dohyeok; Hong, Songnam
Format: Conference Proceeding
Language: English
Description
Summary: We investigate online federated learning (OFL), in which many edge nodes receive their own time-series data and train a sequence of global models under the orchestration of a central server while keeping data localized. In this framework, we propose communication-efficient federated quantized online gradient descent (FedQOGD), which combines stochastic quantization with partial node participation. We theoretically prove that FedQOGD over $T$ time slots achieves the optimal sublinear regret bound $\mathcal{O}(\sqrt{T})$ for any quantization level (e.g., 1-level quantization), even when each node participates in the learning process only sporadically. Our analysis reveals that FedQOGD attains the same asymptotic performance as its centralized counterpart (i.e., all local data gathered at the central server) while incurring low communication overhead and preserving edge-node privacy. Finally, we verify the effectiveness of our algorithm via experiments on an online classification task with the real-world MNIST dataset.
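
The summary names the main algorithmic ingredients: stochastic gradient quantization, partial node participation, and an online gradient-descent update coordinated by the server. As a rough illustration only, the Python sketch below shows one way these pieces can fit together; the helper names (`stochastic_quantize`, `fedqogd_round`), the quantization level `s`, the step size `eta`, and the participant set are assumptions made for exposition and are not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: this is not the authors' exact FedQOGD
# specification, just a generic quantized online-gradient-descent round.

def stochastic_quantize(g, s=1, rng=None):
    """Unbiased stochastic quantization of a gradient vector onto s levels.

    Each coordinate is randomly rounded to one of s uniformly spaced levels
    of |g_i| / ||g||, so that E[Q(g)] = g (QSGD-style quantization).
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return np.zeros_like(g)
    scaled = np.abs(g) / norm * s          # each entry lies in [0, s]
    lower = np.floor(scaled)
    prob_up = scaled - lower               # round up with this probability
    xi = lower + (rng.random(g.shape) < prob_up)
    return norm * np.sign(g) * xi / s


def fedqogd_round(w, local_grads, participants, eta, s=1, rng=None):
    """One communication round of quantized online gradient descent.

    Each participating node quantizes its local gradient and sends it to the
    server; the server averages the quantized gradients and takes a step.
    """
    rng = rng or np.random.default_rng()
    if not participants:
        return w                           # no node reported this round
    q_sum = np.zeros_like(w)
    for k in participants:
        q_sum += stochastic_quantize(local_grads[k], s=s, rng=rng)
    return w - eta * q_sum / len(participants)
```

In this sketch, sporadic participation is modeled simply by passing a (possibly small or empty) subset of node indices as `participants` each round, and communication cost is reduced because only the quantized gradients leave the nodes.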
ISSN: 1558-2612
DOI: 10.1109/WCNC51071.2022.9771579