Generalization Bounds for Stochastic Gradient Descent via Localized \(\varepsilon\)-Covers

Bibliographic Details
Published in: arXiv.org 2022-09
Main Authors: Park, Sejun, Şimşekli, Umut, Erdogdu, Murat A.
Format: Article
Language: English
Description
Summary: In this paper, we propose a new covering technique localized for the trajectories of SGD. This localization provides an algorithm-specific complexity measured by the covering number, which can have dimension-independent cardinality, in contrast to standard uniform covering arguments that result in exponential dimension dependence. Based on this localized construction, we show that if the objective function is a finite perturbation of a piecewise strongly convex and smooth function with \(P\) pieces, i.e. non-convex and non-smooth in general, the generalization error can be upper bounded by \(O(\sqrt{(\log n\log(nP))/n})\), where \(n\) is the number of data samples. In particular, this rate is independent of the dimension and requires neither early stopping nor a decaying step size. Finally, we employ these results in various contexts and derive generalization bounds for multi-index linear models, multi-class support vector machines, and \(K\)-means clustering for both hard and soft label setups, improving the known state-of-the-art rates.
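
For illustration, the rate \(O(\sqrt{(\log n\log(nP))/n})\) stated in the summary can be evaluated numerically. The following minimal sketch omits the constants hidden by the \(O(\cdot)\) notation, and the helper name sgd_generalization_rate is assumed for this example only; it simply shows that the quantity depends on \(n\) and \(P\) but not on the dimension, and that it decays as \(n\) grows.

    import math

    def sgd_generalization_rate(n: int, P: int) -> float:
        # Order of the bound sqrt(log(n) * log(n * P) / n) from the summary.
        # Constants hidden by the O(.) notation are omitted; the value
        # depends only on n and P, not on the ambient dimension.
        return math.sqrt(math.log(n) * math.log(n * P) / n)

    # Example: with P = 10 pieces, the rate shrinks as the sample size n grows.
    for n in (10**3, 10**4, 10**5, 10**6):
        print(f"n = {n:>7}: rate ~ {sgd_generalization_rate(n, 10):.4f}")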
ISSN:2331-8422