On stochastic gradient Langevin dynamics with dependent data streams in the logconcave case

Bibliographic Details
Published in:arXiv.org 2019-09
Main Authors: Barkhagen, M, Chau, N H, Moulines, É, Rásonyi, M, Sabanis, S, Zhang, Y
Format: Article
Language:English
Description
Summary:We study the problem of sampling from a probability distribution \(\pi\) on \(\mathbb{R}^d\) which has a density with respect to the Lebesgue measure known up to a normalization factor, \(x \mapsto \mathrm{e}^{-U(x)} / \int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \,\mathrm{d} y\). We analyze a sampling method based on the Euler discretization of the Langevin stochastic differential equation under the assumptions that the potential \(U\) is continuously differentiable, \(\nabla U\) is Lipschitz, and \(U\) is strongly convex. We focus on the case where the gradient of the log-density cannot be computed directly, but unbiased estimates of the gradient from possibly dependent observations are available. This setting can be seen as a combination of a stochastic-approximation (here, stochastic gradient) algorithm with discretized Langevin dynamics. We obtain an upper bound on the Wasserstein-2 distance between the law of the iterates of this algorithm and the target distribution \(\pi\), with constants depending explicitly on the Lipschitz and strong-convexity constants of the potential and on the dimension of the space. Finally, under weaker assumptions on \(U\) and its gradient, but in the presence of independent observations, we obtain analogous results in Wasserstein-2 distance.
ISSN:2331-8422
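
The algorithm studied in the abstract, stochastic gradient Langevin dynamics, can be illustrated with a minimal sketch. The update below is the Euler discretization \(x_{k+1} = x_k - \lambda\, \widehat{\nabla U}(x_k) + \sqrt{2\lambda}\,\xi_k\) with an unbiased but noisy gradient estimate; the quadratic potential, the noise level, and all function names here are illustrative assumptions, not the paper's setting (the paper allows dependent data streams):

```python
import numpy as np

# Sketch of stochastic gradient Langevin dynamics (SGLD): the Euler
# discretization of the Langevin SDE, driven by a noisy gradient estimate.
# Illustrative target: pi(x) ∝ exp(-U(x)) with U(x) = ||x||^2 / 2, i.e. a
# standard Gaussian on R^d, so the "true" gradient is grad U(x) = x.

def sgld(grad_estimate, x0, step, n_iters, rng):
    """Run SGLD and return the trajectory of iterates."""
    x = np.array(x0, dtype=float)
    traj = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        # Euler step: gradient drift plus injected Gaussian noise.
        x = x - step * grad_estimate(x, rng) + np.sqrt(2.0 * step) * noise
        traj[k] = x
    return traj

rng = np.random.default_rng(0)

def noisy_grad(x, rng):
    # Unbiased estimate of grad U(x) = x, perturbed by zero-mean noise;
    # a stand-in for a stochastic gradient computed from observations.
    return x + 0.1 * rng.standard_normal(x.size)

traj = sgld(noisy_grad, x0=np.zeros(2), step=0.05, n_iters=20000, rng=rng)
samples = traj[5000:]  # discard burn-in

print(np.abs(samples.mean(axis=0)).max())  # near 0 (target mean)
print(samples.var(axis=0))                 # near 1, up to O(step) bias
```

For a fixed step size \(\lambda\), the iterates converge to a biased stationary law whose Wasserstein-2 distance to \(\pi\) is controlled by bounds of the type the paper establishes; shrinking the step reduces the bias at the cost of slower mixing.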