
Divergence-Based Motivation for Online EM and Combining Hidden Variable Models

Expectation-Maximization (EM) is a prominent approach for parameter estimation in hidden (aka latent) variable models. Given the full batch of data, EM forms an upper bound on the negative log-likelihood of the model at each iteration and updates the parameters to the minimizer of this upper bound. We first provide a "model level" interpretation of the EM upper bound as a sum of relative entropy divergences to a set of singleton models induced by the set of observations. Our alternative motivation unifies the "observation level" and the "model level" views of EM. As a result, we formulate an online version of the EM algorithm by adding an analogous inertia term, which corresponds to the relative entropy divergence to the old model. Our motivation is more widely applicable than previous approaches and leads to simple online updates for mixtures of exponential distributions, hidden Markov models, and the first known online update for Kalman filters. Additionally, the finite-sample form of the inertia term lets us derive online updates when there is no closed-form solution. Finally, we extend the analysis to the distributed setting, where we motivate a systematic way of combining multiple hidden variable models. Experimentally, we validate the results on synthetic as well as real-world datasets.
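The abstract does not spell out the update explicitly, so the following is a hedged reconstruction of the kind of objective it describes: one round of the online variant can be read as trading off an inertia divergence to the previous model against the divergence to the singleton model induced by the new observation. The notation (RE for relative entropy, P_theta for the model, the singleton model of observation x_t, and the trade-off weight eta) is assumed here for illustration and is not taken from the paper.

```latex
% Sketch of a divergence-based online objective (notation assumed, not the paper's):
% the new model \theta_{t+1} balances inertia toward the old model \theta_t
% against fit to the singleton model induced by the new observation x_t.
\theta_{t+1}
  \;=\; \operatorname*{arg\,min}_{\theta}\;
  \underbrace{\mathrm{RE}\!\left(P_{\theta_t} \,\middle\|\, P_{\theta}\right)}_{\text{inertia to the old model}}
  \;+\;
  \eta\,
  \underbrace{\mathrm{RE}\!\left(\widehat{P}_{x_t} \,\middle\|\, P_{\theta}\right)}_{\text{divergence to the singleton model of } x_t}
```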

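For concreteness, here is a minimal stepwise (online) EM sketch for a one-dimensional Gaussian mixture, in which a step size interpolates the running sufficient statistics with those of the new observation and thereby plays the role of the inertia toward the previous model. This is a generic illustration of online EM, not the paper's exact divergence-derived update; the function names, initialization scheme, and step-size schedule are placeholders.

```python
import numpy as np


def init_stats(k, seed=0):
    """Running sufficient statistics for a k-component 1-D Gaussian mixture."""
    rng = np.random.default_rng(seed)
    s0 = np.full(k, 1.0 / k)            # responsibility mass per component
    means0 = rng.normal(size=k)         # random initial means (breaks symmetry)
    return {"s0": s0, "s1": s0 * means0, "s2": s0 * (means0 ** 2 + 1.0)}


def online_em_step(x, stats, step_size):
    """One stepwise/online EM update on a single observation x.

    The decayed average of sufficient statistics acts as the inertia
    toward the previous model; this is an illustrative stand-in, not
    necessarily the update derived in the paper.
    """
    # Current parameters implied by the running statistics.
    weights = stats["s0"] / stats["s0"].sum()
    means = stats["s1"] / stats["s0"]
    variances = np.maximum(stats["s2"] / stats["s0"] - means ** 2, 1e-8)

    # E-step: responsibilities of x under the current mixture.
    log_p = (np.log(weights)
             - 0.5 * np.log(2.0 * np.pi * variances)
             - 0.5 * (x - means) ** 2 / variances)
    resp = np.exp(log_p - log_p.max())
    resp /= resp.sum()

    # M-step: interpolate old statistics with the singleton statistics of x.
    stats["s0"] = (1.0 - step_size) * stats["s0"] + step_size * resp
    stats["s1"] = (1.0 - step_size) * stats["s1"] + step_size * resp * x
    stats["s2"] = (1.0 - step_size) * stats["s2"] + step_size * resp * x ** 2
    return stats


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Stream from a two-component mixture and run online EM with a decaying step.
    data = np.concatenate([rng.normal(-2.0, 0.5, 500), rng.normal(3.0, 1.0, 500)])
    rng.shuffle(data)
    stats = init_stats(k=2)
    for t, x in enumerate(data, start=1):
        stats = online_em_step(x, stats, step_size=1.0 / (t + 10) ** 0.6)
    print("estimated means:", stats["s1"] / stats["s0"])
```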
Bibliographic Details
Published in: arXiv.org, 2020-02
Main Authors: Amid, Ehsan; Warmuth, Manfred K
Format: Article
Language: English
Subjects: Algorithms; Computer simulation; Divergence; Entropy; Inertia; Iterative methods; Kalman filters; Markov chains; Motivation; Parameter estimation; Upper bounds
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)