Distributed Nesterov Gradient and Heavy-Ball Double Accelerated Asynchronous Optimization
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2021-12, Vol. 32 (12), pp. 5723-5737
Format: Article
Language: English
Summary: In this article, we propose a novel Nesterov gradient and heavy-ball double accelerated distributed synchronous optimization algorithm, called NHDA, and adopt a general asynchronous model to further propose an effective asynchronous algorithm, called ASY-NHDA, for distributed optimization problems over directed graphs, where each agent has access to a local objective function and computes the optimal solution by communicating only with its immediate neighbors. Our goal is to minimize the sum of all local objective functions, which satisfy strong convexity and Lipschitz continuity. We consider a general asynchronous model in which agents communicate with their immediate neighbors and start new computations independently; that is, agents can communicate with their neighbors at any time without any coordination and use delayed information from their in-neighbors to compute new updates. Delays are arbitrary, unpredictable, and time-varying, but bounded. The theoretical analysis of NHDA is based on analyzing the interaction among the consensus, gradient-tracking, and optimization processes. For the analysis of ASY-NHDA, we equivalently transform the asynchronous system into an augmented synchronous system without delays and prove its convergence using the generalized small gain theorem. The results show that NHDA and ASY-NHDA converge to the optimal solution at a linear rate as long as the largest step size is positive and less than an explicitly estimated upper bound, and the largest momentum parameter is nonnegative and less than its own upper bound. Finally, we demonstrate the advantages of ASY-NHDA through simulations.
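The record gives only a high-level description of NHDA; the exact recursions are not stated here. As a rough, hypothetical illustration of what a double-accelerated gradient-tracking update can look like, the sketch below combines a Nesterov-style extrapolation (weight `gamma`) and a heavy-ball correction (weight `beta`) with a standard gradient tracker on a toy problem. The update form, the ring graph, the doubly stochastic weights `W`, and the parameters `alpha`, `beta`, and `gamma` are all assumptions made for illustration; the paper itself treats directed graphs and asynchronous, delayed communication, which this synchronous toy omits.

```python
# Hypothetical sketch of a double-accelerated gradient-tracking step; NOT the
# paper's exact NHDA recursion. Local objectives are f_i(x) = 0.5*(x - b_i)^2,
# so the global minimizer of sum_i f_i is the mean of the targets b_i.
import numpy as np

n = 5                                    # number of agents
targets = np.arange(n, dtype=float)      # b_i; x* = targets.mean()

def grad(x):
    # Stacked local gradients: agent i evaluates grad f_i at its own iterate.
    return x - targets

# Doubly stochastic mixing weights on an undirected ring (the paper instead
# handles directed graphs, e.g. via column-stochastic weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha, beta, gamma = 0.1, 0.2, 0.2       # step size, heavy-ball, Nesterov (assumed)

x = np.zeros(n)
x_prev = x.copy()
y = grad(x)                              # gradient tracker, initialized at local gradients

for _ in range(300):
    # Consensus on Nesterov-extrapolated states, tracked-gradient descent step,
    # plus a heavy-ball momentum correction.
    x_new = W @ (x + gamma * (x - x_prev)) - alpha * y + beta * (x - x_prev)
    # Gradient tracking: y_k's average stays equal to the average local gradient.
    y = W @ y + grad(x_new) - grad(x)
    x_prev, x = x, x_new

print("max distance to optimum:", np.abs(x - targets.mean()).max())
```

Because the column sums of `W` are one, the average of the tracker `y` always equals the average of the local gradients, so at a fixed point the agents reach consensus and stationarity of the global objective simultaneously; the two momentum terms only accelerate how fast that point is approached.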
ISSN: 2162-237X; 2162-2388
DOI: 10.1109/TNNLS.2020.3027381