
Unnormalized Variational Bayes

We unify empirical Bayes and variational Bayes for approximating unnormalized densities. This framework, named unnormalized variational Bayes (UVB), is based on formulating a latent variable model for the random variable \(Y=X+N(0,\sigma^2 I_d)\) and using the evidence lower bound (ELBO), computed by a variational autoencoder, as a parametrization of the energy function of \(Y\) which is then used to estimate \(X\) with the empirical Bayes least-squares estimator. In this intriguing setup, the \(\textit{gradient}\) of the ELBO with respect to noisy inputs plays the central role in learning the energy function. Empirically, we demonstrate that UVB has a higher capacity to approximate energy functions than the parametrization with MLPs as done in neural empirical Bayes (DEEN). We especially showcase \(\sigma=1\), where the differences between UVB and DEEN become visible and qualitative in the denoising experiments. For this high level of noise, the distribution of \(Y\) is very smoothed and we demonstrate that one can traverse in a single run \(-\) without a restart \(-\) all MNIST classes in a variety of styles via walk-jump sampling with a fast-mixing Langevin MCMC sampler. We finish by probing the encoder/decoder of the trained models and confirm UVB \(\neq\) VAE.
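For context, the empirical Bayes least-squares estimator referred to in the abstract is, under the stated Gaussian noise model \(Y=X+N(0,\sigma^2 I_d)\), the classical posterior-mean identity (background material, not quoted from the record):

\[
\widehat{x}(y) \;=\; \mathbb{E}[X \mid Y=y] \;=\; y + \sigma^2 \nabla_y \log p(y) \;=\; y - \sigma^2 \nabla_y f(y),
\]

where \(f(y)=-\log p(y)\) is the energy function of \(Y\). In UVB this energy is parametrized by the negative ELBO of a variational autoencoder, which is why the gradient of the ELBO with respect to the noisy input \(y\) is the central learned quantity.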

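The walk-jump sampling mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (grad_energy, n_steps, delta) are hypothetical, and it assumes access to the gradient of a learned energy \(f(y)\).

```python
import numpy as np

def walk_jump_sample(grad_energy, y0, sigma, n_steps=1000, delta=1e-2, rng=None):
    """Minimal walk-jump sampling sketch (hypothetical names, not the paper's code).

    Walk: unadjusted Langevin dynamics on the smoothed variable Y, driven by
          the gradient of a learned energy f(y) (in UVB, the negative ELBO).
    Jump: map the current y to an estimate of the clean variable X with the
          empirical Bayes least-squares estimator x_hat = y - sigma^2 * grad f(y).
    """
    rng = np.random.default_rng() if rng is None else rng
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        # Langevin step: move down the energy and add Gaussian noise.
        y = y - delta * grad_energy(y) + np.sqrt(2.0 * delta) * rng.standard_normal(y.shape)
    # Jump step: denoise the final y back to X-space.
    x_hat = y - sigma ** 2 * grad_energy(y)
    return y, x_hat
```

In the abstract's MNIST experiment the chain is run without restarts, so in practice one would take the jump step repeatedly along a single walk to collect samples; the single jump above is only for brevity.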

Bibliographic Details
Published in: arXiv.org, 2020-07
Main Author: Saremi, Saeed
Format: Article
Language: English
Identifier: EISSN 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Subjects: Coders; Encoders-Decoders; Lower bounds; Noise reduction; Parameterization; Random variables
Source: Publicly Available Content (ProQuest)