Secure and accurate personalized federated learning with similarity-based model aggregation
Personalized federated learning (PFL) combines client needs and data characteristics to train personalized models for local clients. However, most previous PFL schemes encounter challenges such as low model prediction accuracy and privacy leakage when applied to practical datasets. Moreover,...
Published in: | IEEE transactions on sustainable computing 2024, p.1-14 |
---|---|
Main Authors: | Tan, Zhouyong; Le, Junqing; Yang, Fan; Huang, Min; Xiang, Tao; Liao, Xiaofeng |
Format: | Article |
Language: | English |
Subjects: | Adaptation models; Computational modeling; Data models; Federated learning; Personalized federated learning; Predictive models; Privacy; Privacy protection; Secure aggregation; Servers; Similarity metric |
cited_by | |
---|---|
cites | |
container_end_page | 14 |
container_issue | |
container_start_page | 1 |
container_title | IEEE transactions on sustainable computing |
container_volume | |
creator | Tan, Zhouyong; Le, Junqing; Yang, Fan; Huang, Min; Xiang, Tao; Liao, Xiaofeng |
description | Personalized federated learning (PFL) combines client needs and data characteristics to train personalized models for local clients. However, most previous PFL schemes encounter challenges such as low model prediction accuracy and privacy leakage when applied to practical datasets. Moreover, existing privacy protection methods fail to achieve satisfactory model prediction accuracy and security simultaneously. In this paper, we propose Privacy-preserving Personalized Federated Learning under Secure Multi-party Computation (SMC-PPFL), which preserves privacy while obtaining a local personalized model with high prediction accuracy. In SMC-PPFL, noise perturbation is utilized to protect similarity computation, and secure multi-party computation is employed for model sub-aggregations. This combination ensures that clients' privacy is preserved and that the computed values remain unbiased without compromising security. We then propose a weighted sub-aggregation strategy based on the similarity of clients and introduce a regularization term into the local training to improve prediction accuracy. Finally, we evaluate the performance of SMC-PPFL on three common datasets. The experimental results show that SMC-PPFL achieves 2% ∼ 15% higher prediction accuracy than previous PFL schemes, and the security analysis verifies that SMC-PPFL resists model inversion attacks and membership inference attacks. (A minimal illustrative sketch of the similarity-weighted aggregation and regularized local training appears after the record fields below.) |
doi_str_mv | 10.1109/TSUSC.2024.3403427 |
format | article |
fulltext | fulltext |
identifier | ISSN: 2377-3782 |
ispartof | IEEE transactions on sustainable computing, 2024, p.1-14 |
issn | 2377-3782; 2377-3790 |
language | eng |
recordid | cdi_crossref_primary_10_1109_TSUSC_2024_3403427 |
source | IEEE Electronic Library (IEL) Journals |
subjects | Adaptation models; Computational modeling; Data models; Federated learning; Personalized federated learning; Predictive models; Privacy; Privacy protection; Secure aggregation; Servers; Similarity metric |
title | Secure and accurate personalized federated learning with similarity-based model aggregation |
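The abstract describes two ideas that lend themselves to a compact illustration: a weighted sub-aggregation in which each client's contribution is scaled by its similarity to the target client, and a regularization term that keeps local training close to the aggregated personalized model. The sketch below is a minimal, hypothetical rendering of those ideas in plain NumPy. The Gaussian noise is only a generic stand-in for the paper's noise perturbation on similarity computation, the cosine-similarity/softmax weighting and the proximal coefficient `mu` are assumptions, and no secure multi-party computation is modeled, so this is not the authors' actual SMC-PPFL protocol.

```python
# Hypothetical sketch of similarity-weighted model aggregation with a
# proximal regularization term. Function names, the noise model, and the
# weighting scheme are illustrative assumptions, not the SMC-PPFL protocol.
import numpy as np

def masked_cosine_similarity(w_i, w_j, noise_scale=0.01, rng=None):
    """Cosine similarity between two flattened model vectors, with small
    zero-mean noise added to each vector before comparison (a stand-in for
    a privacy-preserving similarity computation)."""
    rng = rng or np.random.default_rng()
    a = w_i + rng.normal(0.0, noise_scale, size=w_i.shape)
    b = w_j + rng.normal(0.0, noise_scale, size=w_j.shape)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def similarity_weighted_aggregate(client_models, target_idx, noise_scale=0.01):
    """Build a personalized model for one client by weighting every client's
    model with its (softmax-normalized) similarity to the target client."""
    target = client_models[target_idx]
    sims = np.array([
        masked_cosine_similarity(target, w, noise_scale) for w in client_models
    ])
    weights = np.exp(sims) / np.exp(sims).sum()  # weights sum to 1
    return sum(w_k * m_k for w_k, m_k in zip(weights, client_models))

def regularized_local_step(w_local, grad_fn, w_personalized, lr=0.1, mu=0.5):
    """One local gradient step on the client loss plus a proximal term
    mu/2 * ||w - w_personalized||^2 that keeps the local model close to the
    similarity-aggregated personalized model."""
    grad = grad_fn(w_local) + mu * (w_local - w_personalized)
    return w_local - lr * grad

if __name__ == "__main__":
    # Toy usage: three clients with 2-parameter "models" and a quadratic loss.
    models = [np.array([1.0, 0.9]), np.array([1.1, 1.0]), np.array([-0.8, 2.0])]
    personalized = similarity_weighted_aggregate(models, target_idx=0)
    grad_fn = lambda w: 2.0 * (w - np.array([1.0, 1.0]))  # grad of ||w - c||^2
    updated = regularized_local_step(models[0], grad_fn, personalized)
    print("personalized:", personalized, "updated local:", updated)
```

The proximal form mu/2 * ||w - w_personalized||^2 is one common way to realize such a regularizer in personalized FL objectives; the exact term and the secure similarity computation used in the paper may differ.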