Penalized Overdamped and Underdamped Langevin Monte Carlo Algorithms for Constrained Sampling
| Published in: | arXiv.org 2024-04 |
| --- | --- |
| Format: | Article |
| Language: | English |
Summary: We consider the constrained sampling problem where the goal is to sample from a target distribution \(\pi(x)\propto e^{-f(x)}\) when \(x\) is constrained to lie on a convex body \(\mathcal{C}\). Motivated by penalty methods from continuous optimization, we propose penalized Langevin Dynamics (PLD) and penalized underdamped Langevin Monte Carlo (PULMC) methods that convert the constrained sampling problem into an unconstrained one by introducing a penalty function for constraint violations. When \(f\) is smooth and gradients are available, we obtain \(\tilde{\mathcal{O}}(d/\varepsilon^{10})\) iteration complexity for PLD to sample the target up to an \(\varepsilon\)-error, where the error is measured in total variation (TV) distance and \(\tilde{\mathcal{O}}(\cdot)\) hides logarithmic factors. For PULMC, we improve the result to \(\tilde{\mathcal{O}}(\sqrt{d}/\varepsilon^{7})\) when the Hessian of \(f\) is Lipschitz and the boundary of \(\mathcal{C}\) is sufficiently smooth. To our knowledge, these are the first convergence results for underdamped Langevin Monte Carlo methods in constrained sampling that handle non-convex \(f\), and they provide guarantees with the best dimension dependency among existing methods with deterministic gradients. If unbiased stochastic estimates of the gradient of \(f\) are available, we propose PSGLD and PSGULMC methods that can handle stochastic gradients and are scalable to large datasets without requiring Metropolis-Hastings correction steps. For PSGLD and PSGULMC, when \(f\) is strongly convex and smooth, we obtain \(\tilde{\mathcal{O}}(d/\varepsilon^{18})\) and \(\tilde{\mathcal{O}}(d\sqrt{d}/\varepsilon^{39})\) iteration complexity in the 2-Wasserstein (\(W_2\)) distance. When \(f\) is smooth and can be non-convex, we provide finite-time performance bounds and iteration complexity results. Finally, we illustrate the performance on Bayesian LASSO regression and Bayesian constrained deep learning problems.
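As a concrete illustration of the penalty idea, the sketch below runs overdamped Langevin dynamics on a penalized potential \(f(x) + \tfrac{\delta}{2}\,\mathrm{dist}(x,\mathcal{C})^2\), whose gradient is \(\nabla f(x) + \delta\,(x - \mathrm{proj}_{\mathcal{C}}(x))\). This is only a minimal sketch under assumed choices, not the paper's exact scheme: the Gaussian target, the unit-ball constraint, the penalty weight, and the fixed step size are all illustrative.

```python
import numpy as np

def grad_f(x):
    # Illustrative target: standard Gaussian, f(x) = ||x||^2 / 2, so grad f(x) = x.
    return x

def proj_ball(x, radius=1.0):
    # Euclidean projection onto the illustrative constraint set {||x|| <= radius}.
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def penalized_langevin(grad_f, proj, x0, step=1e-3, penalty=50.0, n_iter=50_000, rng=None):
    """Overdamped Langevin on the penalized potential f(x) + (penalty/2) * dist(x, C)^2.

    For convex C, the gradient of the quadratic distance penalty is penalty * (x - proj(x)),
    so each iteration is a plain unconstrained Langevin step on the penalized potential.
    Step size, penalty weight, and iteration count are assumptions for this sketch.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        grad_pen = grad_f(x) + penalty * (x - proj(x))
        x = x - step * grad_pen + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

samples = penalized_langevin(grad_f, proj_ball, x0=np.zeros(2))
print("fraction of samples inside the unit ball:", np.mean(np.linalg.norm(samples, axis=1) <= 1.0))
```

With a large penalty weight the iterates concentrate near \(\mathcal{C}\), which is the mechanism behind the constrained-sampling guarantees quoted in the summary; the stochastic-gradient variants replace \(\nabla f\) with an unbiased estimate.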
ISSN: 2331-8422