
Noisy Monte Carlo: convergence of Markov chains with approximate transition kernels

Bibliographic Details
Published in: Statistics and Computing, 2016-01, Vol. 26 (1-2), p. 29-47
Main Authors: Alquier, P., Friel, N., Everitt, R., Boland, A.
Format: Article
Language: English
Description
Summary: Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations for which it is impractical or impossible to draw from the transition kernel P. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and it is also the case for intractable likelihood models arising from, for example, Gibbs random fields, such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how ‘close’ the chain given by the transition kernel P̂ is to the chain given by P. We apply these results to several examples from spatial statistics and network analysis.
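
To make the idea in the abstract concrete, the following is a minimal, hypothetical sketch in Python (not the authors' algorithm): a random-walk Metropolis chain run once with the exact full-data log-likelihood (defining the kernel P) and once with a cheap subsample estimate (defining an approximate kernel P̂). The Gaussian target, synthetic data, and subsample size are illustrative assumptions, not taken from the paper.

# Sketch only: exact kernel P vs. an approximate "noisy" kernel P-hat in which
# the log-likelihood is replaced by a rescaled subsample estimate at each step.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=10_000)  # synthetic data, sigma known = 1

def exact_loglik(theta):
    # Full-data Gaussian log-likelihood (up to an additive constant).
    return -0.5 * np.sum((data - theta) ** 2)

def noisy_loglik(theta, m=500):
    # Approximate log-likelihood from a random subsample, rescaled to full-data size.
    sub = rng.choice(data, size=m, replace=False)
    return -0.5 * (len(data) / m) * np.sum((sub - theta) ** 2)

def mh_chain(loglik, n_iter=5_000, step=0.05, theta0=0.0):
    # Random-walk Metropolis; the only difference between the exact and the
    # noisy chain is which log-likelihood is plugged into the acceptance step.
    theta = theta0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()
        if np.log(rng.uniform()) < loglik(prop) - loglik(theta):
            theta = prop
        samples[i] = theta
    return samples

exact = mh_chain(exact_loglik)
noisy = mh_chain(noisy_loglik)
print("posterior mean, exact kernel P:      ", exact[1000:].mean())
print("posterior mean, approximate kernel:  ", noisy[1000:].mean())

Comparing the two sets of samples gives an informal sense of the question the paper studies formally: how far the chain driven by P̂ can drift from the chain driven by P.
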
ISSN: 0960-3174, 1573-1375
DOI: 10.1007/s11222-014-9521-x