Privacy-preserving distributed deep learning based on secret sharing
Published in: Information Sciences, 2020-07, Vol. 527, pp. 108-127
Main Authors: , ,
Format: Article
Language: English
Summary: Distributed deep learning (DDL) naturally provides a privacy-preserving solution that enables multiple parties to jointly learn a deep model without explicitly sharing their local datasets. However, existing privacy-preserving DDL schemes still suffer from severe information leakage and/or a significant increase in communication cost. In this work, we design a privacy-preserving DDL framework in which all participants can keep their local datasets private at low communication and computational cost, while still maintaining the accuracy and efficiency of the learned model. By adopting an effective secret sharing strategy, we allow each participant to split the intervening parameters in the training process into shares and upload only an aggregation result to the cloud server. We theoretically show that the local dataset of a particular participant is well protected against the honest-but-curious cloud server as well as the other participants, even in the challenging case where the cloud server colludes with some participants. Extensive experimental results validate the superiority of the proposed secret sharing based distributed deep learning (SSDDL) framework.
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2020.03.074
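
The summary above describes the mechanism only at a high level. The sketch below illustrates one standard way additive secret sharing can realize private gradient aggregation of the kind the abstract describes: each participant splits its local update into random shares, distributes them, and uploads only a sum of the shares it holds. This is a minimal illustration, not the paper's actual SSDDL construction; the modulus `FIELD`, the fixed-point factor `SCALE`, and all function names are assumptions made for this example.

```python
# Minimal sketch of additive secret sharing for private gradient aggregation.
# NOT the paper's exact SSDDL protocol: FIELD, SCALE, and the fixed-point
# encoding are assumptions chosen for this toy example.
import secrets

import numpy as np

FIELD = 2**31 - 1   # assumed prime modulus for share arithmetic
SCALE = 10**6       # assumed fixed-point scaling for real-valued gradients


def encode(grad):
    """Map a real-valued gradient vector to field elements (fixed-point)."""
    return np.mod(np.round(grad * SCALE).astype(np.int64), FIELD)


def decode(vec):
    """Map aggregated field elements back to a real-valued gradient sum."""
    centered = np.where(vec > FIELD // 2, vec - FIELD, vec)  # undo modular wrap
    return centered.astype(np.float64) / SCALE


def share(vec, num_parties):
    """Split a field vector into additive shares, one per participant."""
    shares = [np.array([secrets.randbelow(FIELD) for _ in vec], dtype=np.int64)
              for _ in range(num_parties - 1)]
    # Last share is chosen so that all shares sum to vec modulo FIELD;
    # any subset of fewer than num_parties shares reveals nothing about vec.
    shares.append(np.mod(vec - np.sum(shares, axis=0), FIELD))
    return shares


# --- toy run: three participants jointly aggregate their local gradients ---
rng = np.random.default_rng(0)
local_grads = [rng.normal(size=4) for _ in range(3)]

# Each participant splits its encoded gradient into one share per participant.
all_shares = [share(encode(g), 3) for g in local_grads]

# Participant k sums the shares it received and uploads only that aggregate.
uploads = [np.mod(sum(all_shares[p][k] for p in range(3)), FIELD)
           for k in range(3)]

# The server sums the uploads; individual gradients stay hidden from it.
aggregate = np.mod(np.sum(uploads, axis=0), FIELD)
print(decode(aggregate))   # ~ sum of the three local gradients
print(sum(local_grads))    # ground truth for comparison
```

In this sketch, every upload the server sees includes at least one share that is unknown to it, so the server learns only the aggregate; recovering an individual participant's gradient would require collusion with all of the remaining participants, which is consistent with the collusion resistance against "some participants" claimed in the abstract.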