
Stabilized distributed online mirror descent for multi-agent optimization

Bibliographic Details
Published in: Knowledge-Based Systems, 2024-11, Vol. 304, p. 112582, Article 112582
Main Authors: Wu, Ping; Huang, Heyan; Lu, Haolin; Liu, Zhengyang
Format: Article
Language: English
Description
Summary: In the domain of multi-agent networks, distributed online mirror descent (DOMD) and distributed online dual averaging (DODA) play pivotal roles as fundamental algorithms for distributed online convex optimization. However, in contrast to DODA, DOMD fails when employed with a dynamic learning-rate sequence. To bridge this gap, we introduce two novel variants of DOMD by incorporating a distributed stabilization step in the primal space and the dual space, respectively. We demonstrate that our stabilized DOMD algorithms achieve a sublinear regret bound with a sequence of dynamic learning rates. We further evolve our dual-stabilized DOMD by integrating a lazily communicated subgradient descent step, resulting in a re-indexed DODA. This establishes a connection between the two types of distributed algorithms, which enhances our understanding of distributed optimization. Moreover, we extend our proposed algorithms to handle the case of the exponentiated gradient, where the iterate is constrained within the probability simplex. Finally, we conduct extensive numerical simulations to validate our theoretical analysis.
Highlights:
•We introduce two new DOMD variants, addressing issues with dynamic learning rates, and provide regret bounds.
•Enhancing our dual-stabilized DOMD with a lazy subgradient descent step, we establish a connection with re-indexed DODA.
•Our algorithms outperform standard DOMD, especially when handling the exponentiated subgradient.
•Numerical simulations on distributed regression and classification confirm our theoretical advancements.
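
Illustration: the sketch below (Python/NumPy) shows what one round of a dual-stabilized, exponentiated-gradient update for a single agent could look like, based only on the abstract above. It is not the authors' algorithm; the function name, the mixing-matrix consensus step, and the stabilization weight gamma_t = 1 - eta_{t+1}/eta_t are assumptions made for illustration.

    import numpy as np

    def stabilized_eg_step(X, grads, W, i, eta_t, eta_next, x_init):
        """Hypothetical one-round update for agent i on the probability simplex.

        X        : (n_agents, d) current simplex iterates of all agents
        grads    : (n_agents, d) local subgradients
        W        : (n_agents, n_agents) doubly stochastic mixing matrix
        eta_t    : current learning rate (non-increasing sequence assumed)
        eta_next : next learning rate
        x_init   : (d,) initial iterate, e.g. the uniform distribution
        """
        # Consensus step: average neighbors' iterates in the dual (log) domain.
        log_mix = W[i] @ np.log(X + 1e-12)

        # Dual stabilization (assumed form): interpolate toward the initial
        # point with weight gamma_t = 1 - eta_next / eta_t.
        gamma_t = 1.0 - eta_next / eta_t
        log_stab = (1.0 - gamma_t) * log_mix + gamma_t * np.log(x_init + 1e-12)

        # Entropic mirror descent (exponentiated gradient) step, followed by
        # renormalization back onto the probability simplex.
        x_new = np.exp(log_stab - eta_t * grads[i])
        return x_new / x_new.sum()

    # Toy usage: 3 agents on the 4-dimensional simplex with uniform mixing.
    rng = np.random.default_rng(0)
    n, d = 3, 4
    X = np.full((n, d), 1.0 / d)
    W = np.full((n, n), 1.0 / n)        # complete-graph averaging weights
    grads = rng.normal(size=(n, d))     # stand-in local subgradients
    x1 = stabilized_eg_step(X, grads, W, 0, eta_t=0.5, eta_next=0.4, x_init=X[0])

With a non-increasing learning-rate sequence, gamma_t stays in [0, 1), so the stabilization step is a convex combination that pulls the mixed iterate toward the initial point; this is the kind of correction the abstract attributes to the stabilized variants, though the exact coefficient used in the paper may differ.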
ISSN: 0950-7051
DOI: 10.1016/j.knosys.2024.112582