Efficient Sampling of Stochastic Differential Equations with Positive Semi-Definite Models
Published in: | arXiv.org 2023-05 |
---|---|
Format: | Article |
Language: | English |
Summary: | This paper deals with the problem of efficient sampling from a stochastic differential equation, given the drift function and the diffusion matrix. The proposed approach leverages a recent model for probabilities \cite{rudi2021psd} (the positive semi-definite -- PSD model) from which it is possible to obtain independent and identically distributed (i.i.d.) samples at precision \(\varepsilon\) with a cost of \(m^2 d \log(1/\varepsilon)\), where \(m\) is the dimension of the model and \(d\) the dimension of the space. The proposed approach consists in: first, computing the PSD model that satisfies the Fokker-Planck equation (or its fractional variant) associated with the SDE, up to error \(\varepsilon\), and then sampling from the resulting PSD model. Assuming some regularity of the Fokker-Planck solution (i.e. \(\beta\)-times differentiability plus some geometric condition on its zeros), we obtain an algorithm that: (a) in the preparatory phase, obtains a PSD model within \(L^2\) distance \(\varepsilon\) of the solution of the equation, with a model of dimension \(m = \varepsilon^{-(d+1)/(\beta-2s)} (\log(1/\varepsilon))^{d+1}\), where \(1/2 \leq s \leq 1\) is the fractional power of the Laplacian, at a total computational cost of \(O(m^{3.5} \log(1/\varepsilon))\); and then (b) for the Fokker-Planck equation, produces i.i.d.\ samples with error \(\varepsilon\) in Wasserstein-1 distance, at a cost of \(O(d \varepsilon^{-2(d+1)/(\beta-2)} \log(1/\varepsilon)^{2d+3})\) per sample. This means that, if the probability associated with the SDE is somewhat regular, i.e. \(\beta \geq 4d+2\), then the algorithm requires \(O(\varepsilon^{-0.88} \log(1/\varepsilon)^{4.5d})\) operations in the preparatory phase and \(O(\varepsilon^{-1/2}\log(1/\varepsilon)^{2d+2})\) per sample. Our results suggest that, as the true solution gets smoother, we can circumvent the curse of dimensionality without requiring any sort of convexity. |
ISSN: | 2331-8422 |
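
A quick check of the per-sample cost quoted in point (b) of the summary: sampling from a PSD model of dimension \(m\) costs \(m^2 d \log(1/\varepsilon)\) per i.i.d. sample, and phase (a), specialized to the non-fractional Fokker-Planck case \(s = 1\), yields \(m = \varepsilon^{-(d+1)/(\beta-2)} (\log(1/\varepsilon))^{d+1}\). Substituting one into the other gives

\[
m^2\, d\, \log(1/\varepsilon) \;=\; d\, \varepsilon^{-2(d+1)/(\beta-2)}\, (\log(1/\varepsilon))^{2(d+1)}\, \log(1/\varepsilon) \;=\; O\!\left(d\, \varepsilon^{-2(d+1)/(\beta-2)}\, \log(1/\varepsilon)^{2d+3}\right),
\]

which is exactly the per-sample cost stated in (b).

The two-phase structure (fit a non-negative PSD model to the target density, then sample it) can be illustrated with a minimal toy sketch. This is not the paper's algorithm: the paper fits the PSD model to the Fokker-Planck equation itself and samples in \(d\) dimensions via approximate conditionals, whereas the sketch below fits a rank-1 PSD model with Gaussian features to a known 1-D density by least squares and samples it by inverse-CDF on a grid; all names, parameters, and the stand-in target density are illustrative assumptions.

```python
# Toy 1-D sketch (hypothetical): rank-1 PSD model fitted to a known density,
# then sampled by inverse-CDF.  A known density stands in for the
# Fokker-Planck solution so the example stays self-contained and runnable.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in target density: a bimodal Gaussian mixture.
def target_density(x):
    return 0.5 * np.exp(-0.5 * (x + 2.0) ** 2) / np.sqrt(2 * np.pi) \
         + 0.5 * np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2 * np.pi)

# Gaussian feature map Phi(x)_i = exp(-(x - c_i)^2 / (2 sigma^2)).
m, sigma = 30, 0.5                      # model dimension and kernel width
centers = np.linspace(-6.0, 6.0, m)

def features(x):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

# Phase (a), toy version: a rank-1 PSD model p_hat(x) = (a^T Phi(x))^2
# (i.e. M = a a^T, which is PSD, so p_hat >= 0 by construction), with a
# obtained by least-squares regression of a^T Phi(x) onto sqrt(density).
grid = np.linspace(-8.0, 8.0, 2000)
Phi = features(grid)                              # shape (2000, m)
a, *_ = np.linalg.lstsq(Phi, np.sqrt(target_density(grid)), rcond=None)
p_hat = (Phi @ a) ** 2

# Normalize numerically so p_hat integrates to 1 on the grid.
dx = grid[1] - grid[0]
p_hat /= p_hat.sum() * dx

# Phase (b), toy version: draw i.i.d. samples by inverting the CDF on the
# grid (the search step loosely mirrors the log(1/eps) bisection cost).
cdf = np.cumsum(p_hat) * dx
u = rng.uniform(size=10_000)
idx = np.minimum(np.searchsorted(cdf, u), grid.size - 1)
samples = grid[idx]

print("sample mean ~", samples.mean(), " sample std ~", samples.std())
```

In this toy version the sample mean should be close to 0 and the standard deviation close to \(\sqrt{5}\approx 2.24\), matching the stand-in mixture; the point of the sketch is only the division of labor between a one-off fitting phase and a cheap per-sample phase.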