
Revisiting the Primal-Dual Method of Multipliers for Optimisation Over Centralised Networks

The primal-dual method of multipliers (PDMM) was originally designed for solving a decomposable optimisation problem over a general network. In this paper, we revisit PDMM for optimisation over a centralised network. We first note that the recently proposed method FedSplit [1] implements PDMM for a centralised network. In [1], Inexact FedSplit (i.e., gradient-based FedSplit) was also studied both empirically and theoretically. We identify the cause for the poor reported performance of Inexact FedSplit, which is due to the suboptimal initialisation in the gradient operations at the client side. To fix the issue of Inexact FedSplit, we propose two versions of Inexact PDMM, which are referred to as gradient-based PDMM (GPDMM) and accelerated GPDMM (AGPDMM), respectively. AGPDMM accelerates GPDMM at the cost of transmitting two times the number of parameters from the server to each client per iteration compared to GPDMM. We provide a new convergence bound for GPDMM for a class of convex optimisation problems. Our new bounds are tighter than those derived for Inexact FedSplit. We also investigate the update expressions of AGPDMM and SCAFFOLD to find their similarities. It is found that when the number K of gradient steps at the client side per iteration is K = 1, both AGPDMM and SCAFFOLD reduce to vanilla gradient descent with proper parameter setup. Experimental results indicate that AGPDMM converges faster than SCAFFOLD when K > 1, while GPDMM converges slightly slower than SCAFFOLD.
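
To make the client-server structure described in the abstract concrete, the following is a minimal, illustrative sketch of an inexact FedSplit/PDMM-style consensus loop on a toy least-squares problem, in which each client approximates its proximal step with K local gradient steps. The problem data, the step sizes s and eta, the warm-start choice for the inner loop, and all variable names are assumptions made for illustration only; this is not the paper's exact GPDMM or AGPDMM update, which is not reproduced in this record.

```python
# Illustrative sketch only: an inexact FedSplit / PDMM-style consensus loop
# on a toy least-squares problem. Step sizes, warm-start choice and variable
# names are assumptions, not the paper's GPDMM/AGPDMM updates.
import numpy as np

rng = np.random.default_rng(0)
m, d, n = 5, 10, 40                       # clients, model dimension, samples per client
A = [rng.standard_normal((n, d)) for _ in range(m)]
b = [A_i @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n) for A_i in A]

def grad_f(i, x):
    """Gradient of the local loss f_i(x) = 0.5 * ||A_i x - b_i||^2."""
    return A[i].T @ (A[i] @ x - b[i])

s, eta, K = 0.05, 1e-3, 10     # prox step size, client step size, local steps (assumed values)
z = np.zeros((m, d))           # server-side auxiliary variables, one block per client
x = np.zeros((m, d))           # client iterates, warm-started across rounds

for t in range(200):
    z_half = np.zeros_like(z)
    for i in range(m):
        # Inexact prox: K gradient steps on f_i(x) + (1/(2s)) * ||x - z_i||^2.
        # How this inner loop is initialised is exactly the point the paper
        # raises about Inexact FedSplit; warm-starting from the previous
        # client iterate is just one plausible choice used here.
        for _ in range(K):
            x[i] -= eta * (grad_f(i, x[i]) + (x[i] - z[i]) / s)
        z_half[i] = 2.0 * x[i] - z[i]      # reflected point sent to the server
    z_bar = z_half.mean(axis=0)            # server averages the reflected points
    z = z + 2.0 * (z_bar - x)              # Peaceman-Rachford style correction broadcast back

# Compare the averaged client model with the centralised least-squares solution.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("distance to centralised solution:", np.linalg.norm(x.mean(axis=0) - x_star))
```

With K = 1 the inner loop collapses to a single gradient step per round, which mirrors the abstract's observation that AGPDMM and SCAFFOLD reduce to vanilla gradient descent for K = 1 under a proper parameter setup; the precise conditions and the convergence bounds are given in the paper itself.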

Bibliographic Details
Published in: IEEE Transactions on Signal and Information Processing over Networks, 2022, Vol. 8, p. 228-243
Main Authors: Zhang, Guoqiang; Niwa, Kenta; Kleijn, W. Bastiaan
Format: Article
Language: English
Subjects: Convergence; Distributed optimisation; FedSplit; Handheld computers; Information processing; Multipliers; Optimization; Parameters; PDMM; Peer-to-peer computing; SCAFFOLD; Scaffolds; Scalability; Servers
DOI: 10.1109/TSIPN.2022.3161077
ISSN: 2373-776X
EISSN: 2373-7778
Publisher: IEEE, Piscataway
Online Access: https://ieeexplore.ieee.org/document/9739857