
A double incremental aggregated gradient method with linear convergence rate for large-scale optimization

Bibliographic Details
Main Authors: Mokhtari, Aryan; Gurbuzbalaban, Mert; Ribeiro, Alejandro
Format: Conference Proceeding
Language: English
Description
Summary: This paper considers the problem of minimizing the average of a finite set of strongly convex functions. We introduce a double incremental aggregated gradient method (DIAG) that computes the gradient of only one function at each iteration, chosen according to a cyclic scheme, and uses the aggregated average gradient of all the functions to approximate the full gradient. We prove not only that the proposed DIAG method converges linearly to the optimal solution, but also that its linear convergence factor justifies the advantage of incremental methods over full-batch gradient descent. In particular, we show theoretically and empirically that one pass of DIAG is more efficient than one iteration of gradient descent.
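
The summary describes the algorithm only at a high level. The sketch below illustrates the kind of update the abstract points to: one new gradient evaluation per iteration, with the chosen component cycling through the functions, and a step taken from running averages of both the stored iterates and the stored gradients (the "double" aggregation). This is a minimal illustrative sketch under stated assumptions, not the authors' reference implementation; the names (diag, grads, alpha) and the quadratic test problem are hypothetical.

    import numpy as np

    def diag(grads, x0, alpha, num_passes=50):
        # Illustrative double incremental aggregated gradient loop
        # (a sketch, not the paper's reference code).
        # grads : list of per-function gradient callables grad f_i(x)
        # x0    : initial point (1-D numpy array)
        # alpha : constant step size, assumed small enough for stability
        n = len(grads)
        y = np.tile(x0, (n, 1))                         # stored iterate y_i per f_i
        g = np.stack([grads[i](x0) for i in range(n)])  # stored gradient per f_i
        y_avg, g_avg = y.mean(axis=0), g.mean(axis=0)
        for k in range(num_passes * n):
            i = k % n                                   # cyclic function selection
            # step from the aggregated averages of iterates and gradients
            x = y_avg - alpha * g_avg
            # refresh only component i: one gradient evaluation per iteration
            new_g = grads[i](x)
            y_avg += (x - y[i]) / n                     # O(d) running-average updates
            g_avg += (new_g - g[i]) / n
            y[i], g[i] = x, new_g
        return x

    # toy usage: n strongly convex quadratics f_i(x) = 0.5 * ||A_i x - b_i||^2
    rng = np.random.default_rng(0)
    n, d = 10, 5
    A = [rng.standard_normal((d, d)) + 3.0 * np.eye(d) for _ in range(n)]
    b = [rng.standard_normal(d) for _ in range(n)]
    grads = [lambda x, A=A[i], b=b[i]: A.T @ (A @ x - b) for i in range(n)]
    x_hat = diag(grads, np.zeros(d), alpha=1e-3)

Note the design point this makes concrete: maintaining the two aggregates incrementally keeps the per-iteration cost at a single gradient evaluation plus O(d) bookkeeping, rather than the n gradient evaluations of a full-batch step.
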
ISSN: 2379-190X
DOI: 10.1109/ICASSP.2017.7953047