Exact Diffusion for Distributed Optimization and Learning - Part I: Algorithm Development

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Feb. 2019, Vol. 67, No. 3, pp. 708-723
Main Authors: Yuan, Kun; Ying, Bicheng; Zhao, Xiaochuan; Sayed, Ali H.
Format: Article
Language:English
Description
Summary: This paper develops a distributed optimization strategy with guaranteed exact convergence for a broad class of left-stochastic combination policies. The resulting exact diffusion strategy is shown in Part II of this paper to have a wider stability range and better convergence performance than the EXTRA strategy. The exact diffusion method is applicable to locally balanced left-stochastic combination matrices, which, compared to conventional doubly stochastic matrices, are more general and able to endow the algorithm with faster convergence rates, more flexible step-size choices, and improved privacy-preserving properties. The derivation of the exact diffusion strategy relies on reformulating the aggregate optimization problem as a penalized problem and resorting to a diagonally weighted incremental construction. Detailed stability and convergence analyses are pursued in Part II of this paper and are facilitated by examining the evolution of the error dynamics in a transformed domain. Numerical simulations illustrate the theoretical conclusions.
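The adapt-correct-combine construction summarized above can be made concrete with a short numerical sketch. This is a minimal illustration rather than the paper's own pseudocode: the quadratic local costs, the four-agent network, the step size mu, and the use of the averaged matrix (I + A)/2 in the combine step are assumptions drawn from the broader diffusion-adaptation literature, not details quoted from this record.

```python
import numpy as np

# Hypothetical setup: N agents, each holding a local least-squares cost
#   J_k(w) = 0.5 * ||H_k w - d_k||^2, with gradient H_k^T (H_k w - d_k).
rng = np.random.default_rng(0)
N, M = 4, 3                        # number of agents, parameter dimension
H = rng.standard_normal((N, 5, M))
d = rng.standard_normal((N, 5))

def grad(k, w):
    """Gradient of agent k's local cost at w."""
    return H[k].T @ (H[k] @ w - d[k])

# Left-stochastic combination matrix (each column sums to one); entry a_lk is
# the weight agent k assigns to neighbor l. This choice happens to be doubly
# stochastic for simplicity; the paper targets the more general class of
# locally balanced left-stochastic policies.
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
A_bar = 0.5 * (A + np.eye(N))      # averaged matrix used in the combine step

mu = 0.05                          # step size (assumed value)
w = np.zeros((N, M))               # one iterate per agent (rows)
psi_prev = w.copy()

for i in range(500):
    # Adapt: local gradient step at every agent.
    psi = np.array([w[k] - mu * grad(k, w[k]) for k in range(N)])
    # Correct: add the difference between the previous iterate and the
    # previous adapt output, which removes the bias of plain diffusion.
    phi = psi + w - psi_prev
    # Combine: weighted average over each agent's neighborhood.
    w = A_bar.T @ phi
    psi_prev = psi

# With a doubly stochastic A, all agents should agree on the minimizer of
# the aggregate cost sum_k J_k(w).
w_star = np.linalg.lstsq(H.reshape(-1, M), d.reshape(-1), rcond=None)[0]
print("max deviation from aggregate minimizer:", np.max(np.abs(w - w_star)))
```

Because the correction step feeds the difference between the previous iterate and the previous adapt output back into the recursion, the fixed point is the exact minimizer of the aggregate cost, whereas the uncorrected adapt-and-combine diffusion recursion settles at a small steady-state bias for any constant step size.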
ISSN: 1053-587X
EISSN: 1941-0476
DOI: 10.1109/TSP.2018.2875898