Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2022-08, Vol. 67 (8), p. 3949-3964
Main Authors: Swenson, Brian, Murray, Ryan, Poor, H. Vincent, Kar, Soummya
Format: Article
Language:English
Description
Summary: The article considers distributed gradient flow (DGF) for multiagent nonconvex optimization. DGF is a continuous-time approximation of distributed gradient descent that is often easier to study than its discrete-time counterpart. The article has two main contributions. First, the article considers optimization of nonsmooth, nonconvex objective functions. It is shown that DGF converges to critical points in this setting. The article then considers the problem of avoiding saddle points. It is shown that if agents' objective functions are assumed to be smooth and nonconvex, then DGF can only converge to a saddle point from a zero-measure set of initial conditions. To establish this result, the article proves a stable manifold theorem for DGF, which is a fundamental contribution of independent interest. In a companion article, analogous results are derived for discrete-time algorithms.
ISSN: 0018-9286; 1558-2523
DOI: 10.1109/TAC.2021.3111853
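
To make the dynamics in the summary concrete, below is a minimal Python sketch (not the authors' code) of the kind of flow the abstract describes: each agent follows its local negative gradient while a consensus term pulls it toward its neighbors. The forward Euler integration, the mixing matrix W, the constant coupling gain beta (the article allows time-varying coupling), and the toy objective z**4 - z**2 are all illustrative assumptions.

import numpy as np

def simulate_dgf(grads, W, x0, beta=5.0, dt=1e-3, steps=20000):
    """Euler-integrate a distributed gradient flow sketch.

    grads: list of per-agent gradient functions
    W:     row-stochastic mixing matrix (agent i averages over its neighbors)
    x0:    initial states, shape (n_agents, dim)
    """
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        consensus = W @ x - x                                # pull toward neighbors' states
        g = np.stack([grad(xi) for grad, xi in zip(grads, x)])  # local gradients
        x = x + dt * (beta * consensus - g)                  # Euler step of the flow
    return x

# Toy example: two agents share the nonconvex objective f(z) = z**4 - z**2,
# which has local minimizers at z = +/- 1/sqrt(2) and an unstable critical
# point at z = 0.
f_grad = lambda z: 4 * z**3 - 2 * z
W = np.array([[0.5, 0.5],
              [0.5, 0.5]])
x_final = simulate_dgf([f_grad, f_grad], W, x0=[[1.5], [-0.7]])
print(x_final)  # agents reach consensus near a local minimizer; z = 0 is avoided

Consistent with the article's saddle-avoidance result, from generic (non-zero-measure) initializations the trajectories above settle at a minimizer rather than the unstable critical point.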