
Stochastic Strongly Convex Optimization via Distributed Epoch Stochastic Gradient Algorithm

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2021-06, Vol. 32 (6), p. 2344-2357
Main Authors: Yuan, Deming, Ho, Daniel W. C., Xu, Shengyuan
Format: Article
Language: English
Description
Summary: This article considers the problem of stochastic strongly convex optimization over a network of multiple interacting nodes. The optimization is subject to a global inequality constraint and the restriction that nodes have access only to the stochastic gradients of their objective functions. We propose an efficient distributed non-primal-dual algorithm that incorporates the inequality constraint into the objective via a smoothing technique. We show that the proposed algorithm achieves an optimal $\mathcal{O}(1/T)$ convergence rate, where $T$ is the total number of iterations, in the mean square distance from the optimal solution. In particular, we establish a high-probability bound for the proposed algorithm, showing that with probability at least $1-\delta$, it converges at a rate of $\mathcal{O}(\ln(\ln(T)/\delta)/T)$. Finally, we provide numerical experiments to demonstrate the efficacy of the proposed algorithm.
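
To make the abstract's ingredients concrete (epoch-based stochastic gradient steps, consensus averaging over a network, and a smoothed surrogate replacing the hard inequality constraint), here is a minimal Python sketch. The ring-network mixing matrix, the softplus smoothing, the doubling epoch schedule, and all function names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Assumed setup: n nodes minimize the sum of local strongly convex
# objectives f_i subject to a shared constraint g(x) <= 0, using only
# noisy (stochastic) gradients of the f_i.
rng = np.random.default_rng(0)
n, d = 5, 3                        # number of nodes, decision dimension

# Doubly stochastic mixing matrix for a ring network (illustrative choice).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

# Local objectives f_i(x) = 0.5 * ||x - b_i||^2 (strongly convex);
# zero-mean noise models the stochastic gradient oracle.
B = rng.normal(size=(n, d))
def stoch_grad_f(i, x):
    return (x - B[i]) + 0.1 * rng.normal(size=d)

# Global inequality constraint g(x) = ||x||^2 - 1 <= 0 and its gradient.
def g(x):      return x @ x - 1.0
def grad_g(x): return 2.0 * x

def smoothed_penalty_grad(x, mu):
    # One common smoothing of the penalty max(0, g(x)): the softplus
    # surrogate mu * log(1 + exp(g(x)/mu)), whose gradient is
    # sigmoid(g(x)/mu) * grad_g(x). This is an assumed stand-in for
    # the paper's smoothing technique, not a reproduction of it.
    s = 1.0 / (1.0 + np.exp(-g(x) / mu))
    return s * grad_g(x)

X = rng.normal(size=(n, d))        # one local iterate per node
strong_convexity = 1.0
epoch_len, total_iters = 8, 0
for epoch in range(10):
    mu = 1.0 / (epoch + 1)         # tighten the smoothing each epoch (assumed schedule)
    for _ in range(epoch_len):
        total_iters += 1
        eta = 1.0 / (strong_convexity * total_iters)  # classic O(1/t) step size
        X = W @ X                  # consensus step: average with neighbors
        for i in range(n):         # local stochastic step on the smoothed objective
            X[i] -= eta * (stoch_grad_f(i, X[i]) + smoothed_penalty_grad(X[i], mu))
    epoch_len *= 2                 # epochs double in length (epoch-GD style)

x_bar = X.mean(axis=0)
print("consensus iterate:", x_bar, " g(x) =", g(x_bar))
```

Under these assumptions, the step-size choice $\eta_t = 1/(\sigma t)$ for a $\sigma$-strongly convex objective is what drives the $\mathcal{O}(1/T)$ mean-square rate the abstract claims; the epoch structure and the per-epoch tightening of the smoothing parameter are schematic placeholders for the schedules analyzed in the paper.
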
ISSN: 2162-237X
2162-2388
DOI: 10.1109/TNNLS.2020.3004723