
Mixed-Precision Quantization for Federated Learning on Resource-Constrained Heterogeneous Devices

Bibliographic Details
Main Authors: Chen, Huancheng; Vikalo, Haris
Format: Conference Proceeding
Language: English
Subjects: Benchmark testing; Computational modeling; Deep learning; Degradation; Federated learning; Quantization (signal); Training
Online Access: Request full text
container_end_page 6148
container_start_page 6138
creator Chen, Huancheng
Vikalo, Haris
description While federated learning (FL) systems often utilize quantization to battle communication and computational bottlenecks, they have heretofore been limited to deploying fixed-precision quantization schemes. Meanwhile, the concept of mixed-precision quantization (MPQ), where different layers of a deep learning model are assigned varying bit-widths, remains unexplored in FL settings. We present a novel FL algorithm, FedMPQ, which introduces mixed-precision quantization to resource-heterogeneous FL systems. Specifically, local models, quantized so as to satisfy a bit-width constraint, are trained by optimizing an objective function that includes a regularization term which promotes reduction of precision in some of the layers without significant performance degradation. The server collects local model updates, de-quantizes them into full-precision models, and then aggregates them into a global model. To initialize the next round of local training, the server relies on the information learned in the previous training round to customize bit-width assignments of the models delivered to different clients. In extensive benchmarking experiments on several model architectures and different datasets in both iid and non-iid settings, FedMPQ outperformed the baseline FL schemes that utilize fixed-precision quantization while incurring only a minor computational overhead on the participating devices.
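The abstract sketches a three-step server-side workflow: de-quantize the received local updates to full precision, aggregate them into a global model, and customize per-layer bit-width assignments for the next round. The sketch below illustrates that flow under simple assumptions (uniform affine quantization, FedAvg-style weighted averaging, and a variance-based placeholder heuristic for choosing bit-widths); the function names, data layout, and heuristic are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: helper names, data layout, and the bit-width
# heuristic are assumptions, not the procedure from the FedMPQ paper.
import numpy as np


def dequantize(q, scale, zero_point):
    """Map integer codes back to full precision (uniform affine quantization)."""
    return (q.astype(np.float32) - zero_point) * scale


def aggregate(client_updates, client_weights):
    """FedAvg-style weighted average of de-quantized per-layer updates.

    Each client update is a dict: layer name -> {"q", "scale", "zero_point"},
    where "q" holds the integer codes sent by that client.
    """
    total = float(sum(client_weights))
    global_model = {}
    for name in client_updates[0]:
        acc = np.zeros(client_updates[0][name]["q"].shape, dtype=np.float32)
        for update, weight in zip(client_updates, client_weights):
            layer = update[name]
            acc += (weight / total) * dequantize(layer["q"], layer["scale"], layer["zero_point"])
        global_model[name] = acc
    return global_model


def assign_bitwidths(layer_sensitivity, budget_avg_bits, choices=(2, 4, 8)):
    """Toy per-layer bit-width assignment for the next round: start every layer
    at the lowest precision, then upgrade the most 'sensitive' layers first
    while the mean bit-width stays within the budget. Placeholder heuristic."""
    bits = {name: min(choices) for name in layer_sensitivity}
    for name in sorted(layer_sensitivity, key=layer_sensitivity.get, reverse=True):
        for b in sorted(c for c in choices if c > bits[name]):
            trial = {**bits, name: b}
            if sum(trial.values()) / len(trial) <= budget_avg_bits:
                bits[name] = b
    return bits


if __name__ == "__main__":
    # Two clients, two layers, 4-bit codes with per-layer scale/zero-point.
    rng = np.random.default_rng(0)
    clients = [
        {
            "conv1": {"q": rng.integers(0, 16, size=(3, 3)), "scale": 0.02, "zero_point": 8},
            "fc": {"q": rng.integers(0, 16, size=(4,)), "scale": 0.05, "zero_point": 8},
        }
        for _ in range(2)
    ]
    global_model = aggregate(clients, client_weights=[100, 50])
    # Cross-client variance of the de-quantized layers serves as a stand-in
    # "sensitivity" signal when choosing next-round precisions.
    sensitivity = {
        name: float(np.var([dequantize(c[name]["q"], c[name]["scale"], c[name]["zero_point"])
                            for c in clients]))
        for name in global_model
    }
    print(assign_bitwidths(sensitivity, budget_avg_bits=4))
```

The regularized local-training objective that encourages lower precision in some layers is not shown; only the server-side round is sketched here.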
doi_str_mv 10.1109/CVPR52733.2024.00587
format conference_proceeding
identifier EISSN: 2575-7075; EISBN: 9798350353006
ispartof 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, p.6138-6148
issn 2575-7075
language eng
recordid cdi_ieee_primary_10658402
source IEEE Xplore All Conference Series
subjects Benchmark testing
Computational modeling
Deep learning
Degradation
Federated learning
Quantization (signal)
Training
title Mixed-Precision Quantization for Federated Learning on Resource-Constrained Heterogeneous Devices