On Improved Distributed Random Reshuffling over Networks
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: In this paper, we consider a distributed optimization problem. A network of n agents, each with its own local loss function, aims to collaboratively minimize the global average loss. We prove improved convergence results for two recently proposed random reshuffling (RR) based algorithms, D-RR and GT-RR, for smooth strongly convex and nonconvex problems, respectively. In particular, we prove an additional speedup with increasing n in both cases. Our experiments show that these methods can provide further communication savings by performing multiple local gradient steps between successive communication rounds, while also outperforming decentralized SGD. Our experiments also reveal a gap in the theoretical understanding of these methods in the nonconvex case.
ISSN: 2379-190X
DOI: 10.1109/ICASSP48485.2024.10447202
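The record carries no algorithmic detail beyond the summary, so the following is a minimal illustrative sketch of a decentralized random reshuffling update of the kind the summary describes, not the paper's D-RR or GT-RR as specified by the authors. Each agent holds synthetic least-squares samples, reshuffles them at the start of every epoch, and interleaves a local gradient step with neighbor averaging through a doubly stochastic mixing matrix W. Every name and parameter here (n, m, d, W, lr, d_rr, local_grad) is an assumption made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: n agents, each with m least-squares samples in R^d
n, m, d = 4, 32, 10
A = rng.standard_normal((n, m, d))   # per-sample feature vectors a_{i,j}
b = rng.standard_normal((n, m))      # per-sample targets b_{i,j}

# Doubly stochastic mixing matrix for a ring graph of n agents
W = np.eye(n) * 0.5
for i in range(n):
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

def local_grad(i, j, x):
    """Gradient of agent i's j-th quadratic 0.5*(a^T x - b)^2, i.e. (a^T x - b) a."""
    a = A[i, j]
    return (a @ x - b[i, j]) * a

def d_rr(epochs=50, lr=0.02):
    """Sketch of decentralized random reshuffling: each epoch, every agent
    draws a fresh permutation of its local samples, then alternates a local
    gradient step with neighbor averaging through W."""
    x = np.zeros((n, d))                                 # one iterate per agent
    for _ in range(epochs):
        perms = [rng.permutation(m) for _ in range(n)]   # fresh local shuffles
        for t in range(m):
            g = np.stack([local_grad(i, perms[i][t], x[i]) for i in range(n)])
            x = W @ (x - lr * g)                         # adapt, then combine
    return x.mean(axis=0)                                # consensus estimate

x_hat = d_rr()
print("final average loss:",
      0.5 * np.mean((np.einsum('imd,d->im', A, x_hat) - b) ** 2))
```

The `W @ (x - lr * g)` line is the adapt-then-combine pattern common in decentralized methods; the communication savings mentioned in the summary would correspond to taking several local gradient steps before each multiplication by W.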