Measure Transformer Semantics for Bayesian Machine Learning
Published in: Logical Methods in Computer Science, 2013-09, Vol. 9, Issue 3, p. 11
Main Authors: Borgström, Johannes; Gordon, Andrew D.; Greenberg, Michael; Margetson, James; Van Gael, Jurgen
Format: Article
Language: English
Summary: The Bayesian approach to machine learning amounts to computing posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs. As a foundation for this kind of programming, we propose a core functional calculus with primitives for sampling prior distributions and observing variables. We define measure-transformer combinators inspired by theorems in measure theory, and use these to give a rigorous semantics to our core calculus. The original features of our semantics include its support for discrete, continuous, and hybrid measures, and, in particular, for observations of zero-probability events. We compile our core language to a small imperative language that is processed by an existing inference engine for factor graphs, which are data structures that enable many efficient inference algorithms. This allows efficient approximate inference of posterior marginal distributions, treating thousands of observations per second for large instances of realistic models.
ISSN: 1860-5974
DOI: 10.2168/LMCS-9(3:11)2013
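
The abstract's central programming idiom — sample from a prior, then observe data, including observations of zero-probability events such as an exact value of a continuous variable — can be illustrated independently of the paper's calculus. The Python sketch below is not the authors' language or their factor-graph compilation; it is a minimal importance-sampling reading of sample/observe, under the assumption that observing a measure-zero event weights a trace by the likelihood density rather than a probability. The skill/performance model and all identifiers are illustrative.

```python
import math
import random

def gaussian_pdf(x, mean, sd):
    """Density of Normal(mean, sd^2) at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def posterior_mean_skill(n_samples=100_000, seed=0):
    """Importance sampling: draw `skill` from its prior, then 'observe'
    an exact performance of 101.0 (a zero-probability event) by weighting
    each trace with the Gaussian density of that observation."""
    rng = random.Random(seed)
    total_w = 0.0
    total_wx = 0.0
    for _ in range(n_samples):
        skill = rng.gauss(100.0, 10.0)       # sample from the prior N(100, 10^2)
        w = gaussian_pdf(101.0, skill, 5.0)  # observe performance == 101.0
        total_w += w
        total_wx += w * skill
    return total_wx / total_w

if __name__ == "__main__":
    # Conjugate-Gaussian ground truth for this model: posterior mean = 100.8.
    print(posterior_mean_skill())
```

Density weighting is what makes conditioning on measure-zero events well defined here; the paper develops this rigorously via measure-transformer combinators and achieves its reported throughput by compiling programs to factor graphs rather than sampling.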