PVFL: Verifiable federated learning and prediction with privacy-preserving

Bibliographic Details
Published in: Computers & Security, 2024-04, Vol. 139, Article 103700, p. 103700
Main Authors: Yin, Benxin; Zhang, Hanlin; Lin, Jie; Kong, Fanyu; Yu, Leyun
Format: Article
Language: English
Description
Summary: Machine learning has been applied in a wide range of fields. To train more effective control models, it has become common for organizations holding private data to collaborate with others, which raises privacy concerns. Federated learning allows multiple participants to train learning models collaboratively without sharing their private training data; only gradients are exchanged. Nonetheless, recent research shows that sharing gradients can still leak the original data. To eliminate the data leakage of federated learning, in this paper, we propose a secure multiparty federated learning control system that comprises a secure training process and a secure prediction process. In the training process, data providers train the learning model collaboratively without disclosing their local data, and the trained model can be verified by the participants. The data providers can then offer users prediction services based on the trained model. In the prediction process, data providers cannot access the user data, users cannot obtain the model, and the prediction result can be verified by the users. We carry out comprehensive experiments to demonstrate the effectiveness of our proposed scheme.
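
The abstract refers to the standard federated learning setting in which participants exchange only gradients rather than raw data. The following minimal Python sketch illustrates that baseline (plain gradient averaging on a least-squares model); it is an assumption-laden illustration of the setting the paper starts from, not the authors' PVFL protocol, and it includes none of the verification or privacy-preserving mechanisms the paper proposes.

import numpy as np

def local_gradient(w, X, y):
    # Least-squares gradient computed on one participant's private data.
    return 2 * X.T @ (X @ w - y) / len(y)

def federated_round(w, datasets, lr=0.01):
    # Each participant computes a gradient locally; only the gradients are
    # aggregated, so raw training data never leaves its owner.
    grads = [local_gradient(w, X, y) for X, y in datasets]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
datasets = []
for _ in range(3):  # three hypothetical data providers with private samples
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    datasets.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, datasets)
print(w)  # approaches [1, -2] without any provider revealing its data

As the abstract notes, even this gradient-only exchange can leak the original data, which is the gap the proposed scheme addresses.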
ISSN: 0167-4048, 1872-6208
DOI: 10.1016/j.cose.2024.103700