Finding the optimal exploration-exploitation trade-off online through Bayesian risk estimation and minimization

Bibliographic Details
Published in: Artificial Intelligence, 2024-05, Vol. 330, p. 104096, Article 104096
Main Authors: Jamieson, Stewart, How, Jonathan P., Girdhar, Yogesh
Format: Article
Language:English
Subjects: Bayesian risk; Multi-armed bandits; Partial monitoring; Stochastic online learning
Abstract: We propose endogenous Bayesian risk minimization (EBRM) over policy sets as an approach to online learning across a wide range of settings. Many real-world online learning problems have complexities such as action- and belief-dependent rewards, time-discounting of reward, and heterogeneous costs for actions and feedback; we find that existing online learning heuristics cannot leverage most problem-specific information, to the detriment of their performance. We introduce a belief-space Markov decision process (BMDP) model that can capture these complexities, and further apply the concepts of aleatoric, epistemic, and process risks to online learning. These risk functions describe the risk inherent to the learning problem, the risk due to the agent's lack of knowledge, and the relative quality of its policy, respectively. We demonstrate how computing and minimizing these risk functions guides the online learning agent towards the optimal exploration-exploitation trade-off in any stochastic online learning problem, constituting the basis of the EBRM approach. We also show how Bayes' risk, the minimization objective in stochastic online learning problems, can be decomposed into the aforementioned aleatoric, epistemic, and process risks. In simulation experiments, EBRM algorithms achieve state-of-the-art performance across various classical online learning problems, including Gaussian and Bernoulli multi-armed bandits, best-arm identification, mixed objectives with action- and belief-dependent rewards, and dynamic pricing, a finite partial monitoring problem. To our knowledge, it is also the first computationally efficient online learning approach that can provide online bounds on an algorithm's Bayes' risk. Finally, because the EBRM approach is parameterized by a set of policy algorithms, it can be extended to incorporate new developments in online learning algorithms, and is thus well-suited as the foundation for developing real-world learning agents.
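The abstract frames stochastic online learning as minimizing Bayes' risk over a posterior belief. As a rough illustration of what estimating such a risk by posterior sampling can look like, the sketch below computes a one-step, regret-style Bayes' risk for a two-armed Bernoulli bandit with Beta posteriors. The function name, the one-step risk definition, and all parameter values are assumptions made for illustration only; this is not the paper's EBRM algorithm, nor its aleatoric/epistemic/process decomposition.

```python
# Illustrative sketch only: a Monte Carlo estimate of one-step Bayes' risk
# for a two-armed Bernoulli bandit under independent Beta posteriors.
# NOT the paper's EBRM method; the regret-style risk here is an assumption.
import numpy as np

rng = np.random.default_rng(0)

def one_step_bayes_risk(alpha, beta, chosen_arm, n_samples=100_000):
    """Estimate E[max_i theta_i - theta_chosen] under independent
    Beta(alpha[i], beta[i]) posteriors, via posterior sampling."""
    theta = np.column_stack([
        rng.beta(alpha[i], beta[i], size=n_samples)
        for i in range(len(alpha))
    ])
    return float(np.mean(theta.max(axis=1) - theta[:, chosen_arm]))

# Hypothetical posteriors after a few observations:
# arm 0 looks better on average but both remain uncertain.
alpha = [3.0, 2.0]  # successes + 1 per arm
beta  = [2.0, 2.0]  # failures + 1 per arm

for arm in (0, 1):
    print(f"arm {arm}: estimated one-step Bayes' risk = "
          f"{one_step_bayes_risk(alpha, beta, arm):.4f}")
```

Under these made-up posteriors, the sketch quantifies the expected shortfall of committing to either arm for one step; the EBRM approach described in the abstract goes much further, decomposing and minimizing such risks over sets of policies.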
DOI: 10.1016/j.artint.2024.104096
ISSN: 0004-3702
EISSN: 1872-7921
Source: ScienceDirect Journals