Distributed Gradient Flow: Nonsmoothness, Nonconvexity, and Saddle Point Evasion

Bibliographic Details
Published in: IEEE Transactions on Automatic Control, 2022-08, Vol. 67 (8), p. 3949-3964
Main Authors: Swenson, Brian; Murray, Ryan; Poor, H. Vincent; Kar, Soummya
Format: Article
Language: English
Subjects: Algorithms; Convergence; Critical point; Distributed optimization; gradient descent; Gradient flow; Heuristic algorithms; Initial conditions; Linear programming; Manifolds; Multiagent systems; nonconvex optimization; nonsmooth optimization; Optimization; saddle point; Saddle points; stable manifold; Trajectory
Description: The article considers distributed gradient flow (DGF) for multiagent nonconvex optimization. DGF is a continuous-time approximation of distributed gradient descent that is often easier to analyze than its discrete-time counterpart. The article makes two main contributions. First, it considers optimization of nonsmooth, nonconvex objective functions and shows that DGF converges to critical points in this setting. Second, it addresses the problem of avoiding saddle points: if agents' objective functions are smooth and nonconvex, then DGF can converge to a saddle point only from a zero-measure set of initial conditions. To establish this result, the article proves a stable manifold theorem for DGF, a contribution of independent interest. A companion article derives analogous results for discrete-time algorithms. (An illustrative sketch of the DGF dynamics follows the record fields below.)
DOI: 10.1109/TAC.2021.3111853
Publisher: IEEE (New York)
ISSN: 0018-9286
EISSN: 1558-2523
Source: IEEE Electronic Library (IEL) Journals
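
Illustration: DGF Dynamics

For orientation, DGF in this line of work is typically written per agent as the following ordinary differential equation. This is a generic sketch in standard notation, with Omega_i the neighbor set of agent i in the communication graph, f_i agent i's local objective, and beta(t) a consensus gain; the article's precise assumptions on the graph and on beta(t) are not reproduced here.

```latex
% Generic DGF dynamics for agent i (a sketch; the precise assumptions on the
% graph and on the gain \beta(t) are those stated in the article).
\dot{x}_i(t) = -\beta(t) \sum_{j \in \Omega_i} \bigl( x_i(t) - x_j(t) \bigr)
               - \nabla f_i\bigl( x_i(t) \bigr),
\qquad i = 1, \dots, N.
```

As a toy numerical companion to the saddle-avoidance result in the description, the sketch below forward-Euler-discretizes these dynamics on a hypothetical three-agent problem whose summed objective has a strict saddle at the origin. The graph, local objectives, and gain schedule are invented for illustration and are not taken from the article.

```python
# Hypothetical sketch: Euler discretization of distributed gradient flow (DGF)
# on a toy smooth nonconvex problem. Graph, objectives, and gain schedule are
# invented for illustration; they are NOT the article's assumptions.
import numpy as np

# Communication graph: 3 agents on a path 1-2-3, with graph Laplacian L.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])

A = [1.0, 2.0, 3.0]  # per-agent curvatures for the stable coordinate

def grad_f(i, xi):
    """Gradient of agent i's local objective f_i(x) = (A[i]/2)*x[0]^2 - (1/2)*x[1]^2.
    The sum F = f_1 + f_2 + f_3 has a strict saddle at the origin."""
    return np.array([A[i] * xi[0], -xi[1]])

def dgf(x0, dt=1e-3, T=10.0):
    """Forward-Euler approximation of dx_i/dt = -beta(t)*(L x)_i - grad f_i(x_i)."""
    x = x0.copy()                       # shape (3, 2): one 2-D state per agent
    for k in range(int(T / dt)):
        beta = 1.0 + k * dt             # growing consensus gain (illustrative)
        consensus = -beta * (L @ x)     # pulls the agents toward agreement
        grads = np.array([grad_f(i, x[i]) for i in range(3)])
        x = x + dt * (consensus - grads)
    return x

rng = np.random.default_rng(0)
x_final = dgf(rng.normal(scale=0.1, size=(3, 2)))
print(x_final)  # rows nearly equal (consensus); column 1 has escaped the saddle
```

From generic random initial conditions the agents reach approximate consensus while the unstable coordinate grows away from the saddle, consistent with the zero-measure statement in the description: only a measure-zero set of initial conditions would keep the trajectory on the stable manifold of the saddle.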