Measure Transformer Semantics for Bayesian Machine Learning

The Bayesian approach to machine learning amounts to computing posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs. As a foundation for this kind of programming, we propose a core functional calculus with primitives for sampling prior distributions and observing variables. We define measure-transformer combinators inspired by theorems in measure theory, and use these to give a rigorous semantics to our core calculus. The original features of our semantics include its support for discrete, continuous, and hybrid measures, and, in particular, for observations of zero-probability events. We compile our core language to a small imperative language that is processed by an existing inference engine for factor graphs, which are data structures that enable many efficient inference algorithms. This allows efficient approximate inference of posterior marginal distributions, treating thousands of observations per second for large instances of realistic models.
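
The abstract describes a calculus built around two primitives: sampling from a prior distribution and observing the value of a variable, with conditioning given meaning by measure transformers. As a rough illustration of that sample/observe style, here is a minimal sketch over finite discrete measures. This is invented pedagogical code, not the paper's calculus, its combinators, or its factor-graph compilation; all names are hypothetical.

```python
# Toy discrete analogue of the sample/observe primitives mentioned in the
# abstract. A measure is a finite map from outcomes to non-negative weights.
from fractions import Fraction
from typing import Callable, Dict, Hashable

Measure = Dict[Hashable, Fraction]

def sample(prior: Measure, k: Callable[[Hashable], Measure]) -> Measure:
    """Draw from `prior`, then continue with `k`; weights multiply per branch."""
    out: Measure = {}
    for x, w in prior.items():
        for y, v in k(x).items():
            out[y] = out.get(y, Fraction(0)) + w * v
    return out

def observe(pred: Callable[[Hashable], bool], m: Measure) -> Measure:
    """Condition on `pred`: keep only the mass where the observation holds."""
    return {x: w for x, w in m.items() if pred(x)}

def normalize(m: Measure) -> Measure:
    """Rescale a non-zero measure so its weights sum to one."""
    total = sum(m.values())
    return {x: w / total for x, w in m.items()}

# Model: a coin is fair or biased (prior 1/2 each); we observe one flip = heads.
coin_prior: Measure = {"fair": Fraction(1, 2), "biased": Fraction(1, 2)}

def flip(coin: Hashable) -> Measure:
    p_heads = Fraction(1, 2) if coin == "fair" else Fraction(3, 4)
    return {(coin, "heads"): p_heads, (coin, "tails"): 1 - p_heads}

joint = sample(coin_prior, flip)
posterior = normalize(observe(lambda xy: xy[1] == "heads", joint))
print(posterior)  # fair: 2/5, biased: 3/5
```

In this toy setting, observation simply discards the mass inconsistent with the evidence and renormalizes. The paper's contribution is a semantics that makes the analogous operation rigorous for continuous and hybrid measures as well, including observations of zero-probability events, which a finite sketch like this cannot express.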

Bibliographic Details
Published in: Logical Methods in Computer Science, 2013-09, Vol. 9, Issue 3, p. 11
Main Authors: Borgström, Johannes; Gordon, Andrew D.; Greenberg, Michael; Margetson, James; Van Gael, Jurgen
Format: Article
Language: English
Publisher: Logical Methods in Computer Science e.V.
DOI: 10.2168/LMCS-9(3:11)2013
ISSN/EISSN: 1860-5974
Subjects: Bayesian modelling; Computer Science; computer science - artificial intelligence; computer science - logic in computer science; computer science - programming languages; Machine learning; Probabilistic programming; Programming languages