A distributed proximal gradient method with time-varying delays for solving additive convex optimizations

Bibliographic Details
Published in: Results in Applied Mathematics, 2023-05, Vol. 18, p. 100370, Article 100370
Main Authors: Namsak, Sakrapee; Petrot, Narin; Nimana, Nimit
Format: Article
Language:English
Description
Summary: We consider the problem of minimizing a finite sum of differentiable and nondifferentiable convex functions in a finite-dimensional Euclidean space. We propose and analyze a distributed proximal gradient method with computational delays. Allowing local delays when computing the local gradient of each differentiable cost function permits out-of-date iterates to be used when generating the next estimates, which benefits situations where gradient computation is so expensive that it cannot be completed within limited time constraints. We provide a condition on the control parameter that guarantees that the sequences generated by the proposed method converge to the unique solution. We finally illustrate the theoretical results with numerical experiments on binary image classification.
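The idea described in the abstract, a proximal gradient step that may use stale (delayed) local gradients, can be sketched as follows. This is a minimal illustration under assumed details (least-squares local costs, an l1 regularizer, a diminishing step size, and a random bounded delay rule), not the paper's exact algorithm or parameter condition.

```python
import numpy as np

# Hypothetical setup: minimize sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1.
# Each "worker" i holds one smooth term; its gradient may be evaluated at
# a stale iterate, mimicking a time-varying computational delay.
rng = np.random.default_rng(0)
n, m, workers = 5, 20, 3
A = [rng.normal(size=(m, n)) for _ in range(workers)]
b = [rng.normal(size=m) for _ in range(workers)]
lam = 0.1

def soft_threshold(v, t):
    # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

max_delay = 3
history = [np.zeros(n)]            # past iterates, so stale reads are possible
x = history[-1]
for k in range(1, 500):
    alpha = 1e-3 / np.sqrt(k)      # diminishing step size (a common choice)
    grad = np.zeros(n)
    for i in range(workers):
        # time-varying delay: each worker reads an iterate up to max_delay old
        d = int(rng.integers(0, min(max_delay, len(history))))
        x_stale = history[-1 - d]
        grad += A[i].T @ (A[i] @ x_stale - b[i])
    x = soft_threshold(x - alpha * grad, alpha * lam)
    history.append(x)
    history = history[-(max_delay + 1):]
```

With small enough steps, the delayed gradients still point in roughly the right direction, so the composite objective decreases despite the workers reading out-of-date iterates.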
ISSN: 2590-0374
DOI: 10.1016/j.rinam.2023.100370