Sort-then-insert: A space efficient and oblivious model aggregation algorithm for top-k sparsification in federated learning
Published in: Future Generation Computer Systems, 2024-09, Vol. 158, pp. 1–10
Main Authors: , ,
Format: Article
Language: English
Summary: Federated Learning (FL) allows multiple clients to collaboratively train machine learning models while preserving the model privacy of the clients. However, when generating a global model during the aggregation process, a malicious FL server could derive clients’ local model weights. Such a threat cannot be completely eliminated, even if model aggregation is performed in the Trusted Execution Environment of the server, due to memory access pattern attacks. To tackle this challenge, our paper focuses on the top-k model aggregation algorithm in FL and introduces a space efficient and oblivious sparsified model aggregation algorithm named Sort-Then-Insert (STI), which removes the dependency of the memory access pattern on the model input, thus protecting the confidentiality of clients’ models. Compared with Path ORAM, STI is over 100 times faster, and compared with the state-of-the-art solution for the same problem, Olive, our theoretical analysis and experiments demonstrate that STI can achieve comparable performance overhead (O(nk log²(nk)) + O(d log²(d)) for STI vs. O((nk+d) log²(nk+d)) for Olive) and reduced space overhead (max(O(nk), O(d)) for STI vs. O(nk+d) for Olive).
Highlights:
• A top-k model aggregation algorithm for FL achieving memory access obliviousness.
• Similar time complexity and reduced space complexity, compared with existing works.
• Evaluated through experiments using two sorting networks and different parameters.
ISSN: 0167-739X, 1872-7115
DOI: 10.1016/j.future.2024.04.022
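The record does not include the algorithm itself, so the sketch below is only a plain, non-oblivious reference for the setting the abstract describes: n clients each upload their top-k gradient entries as (index, value) pairs, and the server sums them into a dense d-dimensional update. The function names (top_k_sparsify, aggregate_sparse) and the use of NumPy are illustrative assumptions, not the paper's code; the actual STI algorithm replaces the data-dependent writes below with oblivious operations so that the memory access pattern reveals nothing about which coordinates were updated.

```python
# Hypothetical, non-oblivious sketch of top-k sparsified aggregation in FL.
# The per-index writes in aggregate_sparse leak which coordinates each
# client updated -- exactly the leakage the paper's STI algorithm removes.
import numpy as np


def top_k_sparsify(grad: np.ndarray, k: int):
    """Return the k largest-magnitude entries of a local gradient as (indices, values)."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]


def aggregate_sparse(updates, d: int) -> np.ndarray:
    """Sum n clients' top-k updates into a dense d-dimensional vector.

    Sorting the n*k (index, value) pairs by index before insertion mirrors a
    'sort, then insert' structure, but an ordinary sort and the indexed writes
    below are still data-dependent, hence not oblivious.
    """
    pairs = sorted((int(i), float(v)) for idx, val in updates for i, v in zip(idx, val))
    agg = np.zeros(d)
    for i, v in pairs:
        agg[i] += v
    return agg


# Toy usage: n = 3 clients, model dimension d = 8, k = 2 entries per client.
rng = np.random.default_rng(0)
updates = [top_k_sparsify(rng.normal(size=8), k=2) for _ in range(3)]
print(aggregate_sparse(updates, d=8))
```

The log²(·) factors in the reported complexity are consistent with data-oblivious sorting networks such as bitonic sort, which execute a fixed compare-exchange schedule of O(m log² m) operations on m items regardless of the input values, in line with the highlights' mention of two sorting networks.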