
Adaptive Top-K in SGD for Communication-Efficient Distributed Learning in Multi-Robot Collaboration

Bibliographic Details
Published in: IEEE Journal of Selected Topics in Signal Processing, 2024-04, Vol. 18(3), pp. 487-501
Main Authors: Ruan, Mengzhe; Yan, Guangfeng; Xiao, Yuanzhang; Song, Linqi; Xu, Weitao
Format: Article
Language: English
ISSN: 1932-4553
EISSN: 1941-0484
DOI: 10.1109/JSTSP.2024.3381373
Publisher: New York: IEEE

Abstract
Distributed stochastic gradient descent (D-SGD) with gradient compression has become a popular communication-efficient solution for accelerating optimization in distributed learning systems such as multi-robot systems. One commonly used method for gradient compression is Top-K sparsification, which sparsifies the gradients to a fixed degree throughout model training. However, there has been no adaptive approach, with a systematic treatment and analysis, for adjusting the sparsification degree to maximize model performance or training speed. This paper proposes a novel adaptive Top-K framework for stochastic gradient descent that adapts the degree of sparsification at each gradient descent step, optimizing convergence by balancing the trade-off between communication cost and convergence error with respect to the gradient norm and the communication budget. First, an upper bound on the convergence error is derived for the adaptive sparsification scheme and the loss function. Second, we consider communication budget constraints and propose an optimization formulation that minimizes the deep model's convergence error under those constraints, yielding an enhanced compression algorithm that significantly improves model accuracy for a given communication budget. Finally, we conduct numerical experiments on general image classification tasks using the MNIST and CIFAR-10 datasets; for the multi-robot collaboration setting, we use the object detection task on the PASCAL VOC dataset. The results demonstrate that the proposed adaptive Top-K algorithm in SGD achieves a significantly better convergence rate than state-of-the-art methods, even after accounting for error compensation.
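To make the mechanism concrete, the sketch below shows Top-K sparsification with error compensation and a heuristic per-step choice of K driven by the gradient norm and a remaining communication budget. This is an illustration only, not the paper's algorithm: the paper derives its adaptive rule from a convergence-error bound, whereas the budget-allocation heuristic, the toy quadratic objective, and all names here (top_k_sparsify, adaptive_k) are assumptions made for demonstration.

```python
# Minimal sketch, assuming a single worker and a toy quadratic objective.
# The adaptive rule below (spend more budget when the gradient norm is above
# its running average) is a hypothetical stand-in for the paper's optimized
# allocation; it is not the authors' method.
import numpy as np

def top_k_sparsify(g, k):
    """Keep the k largest-magnitude entries of g; zero out the rest."""
    out = np.zeros_like(g)
    if k > 0:
        idx = np.argpartition(np.abs(g), -k)[-k:]  # indices of top-k magnitudes
        out[idx] = g[idx]
    return out

def adaptive_k(g_norm, avg_norm, budget_left, steps_left, k_min=1, k_max=None):
    """Heuristic per-step k: scale the fair per-step share of the remaining
    budget by how large the current gradient norm is relative to its average."""
    fair = budget_left / max(steps_left, 1)
    k = int(round(fair * g_norm / max(avg_norm, 1e-12)))
    k = max(k_min, k)
    if k_max is not None:
        k = min(k, k_max)
    return min(k, budget_left)  # never exceed what is left of the budget

rng = np.random.default_rng(0)
dim, steps, lr = 1000, 200, 0.1
total_budget = 20 * steps          # total coordinates we may transmit overall
x = rng.normal(size=dim)           # parameters of f(x) = ||x||^2 / 2
residual = np.zeros(dim)           # error-compensation memory
avg_norm, budget_left = None, total_budget

for t in range(steps):
    grad = x + 0.01 * rng.normal(size=dim)   # noisy gradient of the toy loss
    corrected = grad + residual              # add back previously dropped mass
    g_norm = float(np.linalg.norm(corrected))
    avg_norm = g_norm if avg_norm is None else 0.9 * avg_norm + 0.1 * g_norm
    k = adaptive_k(g_norm, avg_norm, budget_left, steps - t, k_max=dim)
    sparse = top_k_sparsify(corrected, k)
    residual = corrected - sparse            # remember what was not transmitted
    budget_left -= k                         # only `sparse` would be communicated
    x = x - lr * sparse

print(f"final loss {0.5 * float(x @ x):.4f}, budget left: {budget_left}")
```

The residual accumulator is the standard error-feedback mechanism: coordinates dropped by the compressor are added back before the next compression, which is what the abstract refers to when it compares methods "even after accounting for error compensation."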

Subjects:
Adaptation models
Adaptive algorithms
Budgets
Collaboration
Communication
communication-efficient
Computer aided instruction
Constraints
Convergence
Cooperation
Cost analysis
Datasets
Distance learning
Distributed learning
Error analysis
Error compensation
gradient sparsification
Image classification
Image compression
Image enhancement
Learning
multi-robot collaboration
Multiple robots
Object recognition
Optimization
Quantization (signal)
Robots
Training
Upper bounds