
Uncertainty in Bayesian Leave-One-Out Cross-Validation Based Model Comparison

Bibliographic Details
Published in: arXiv.org 2023-10
Main Authors: Sivula, Tuomas; Magnusson, Måns; Asael Alonzo Matamoros; Vehtari, Aki
Format: Article
Language: English
container_title arXiv.org
creator Sivula, Tuomas
Magnusson, Måns
Asael Alonzo Matamoros
Vehtari, Aki
description Leave-one-out cross-validation (LOO-CV) is a popular method for comparing Bayesian models based on their estimated predictive performance on new, unseen, data. As leave-one-out cross-validation is based on finite observed data, there is uncertainty about the expected predictive performance on new data. By modeling this uncertainty when comparing two models, we can compute the probability that one model has a better predictive performance than the other. Modeling this uncertainty well is not trivial, and for example, it is known that the commonly used standard error estimate is often too small. We study the properties of the Bayesian LOO-CV estimator and the related uncertainty estimates when comparing two models. We provide new results of the properties both theoretically in the linear regression case and empirically for multiple different models and discuss the challenges of modeling the uncertainty. We show that problematic cases include: comparing models with similar predictions, misspecified models, and small data. In these cases, there is a weak connection in the skewness of the individual leave-one-out terms and the distribution of the error of the Bayesian LOO-CV estimator. We show that it is possible that the problematic skewness of the error distribution, which occurs when the models make similar predictions, does not fade away when the data size grows to infinity in certain situations. Based on the results, we also provide practical recommendations for the users of Bayesian LOO-CV for model comparison.
format article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2023-10
issn 2331-8422
language eng
recordid cdi_proquest_journals_2436977734
source Publicly Available Content (ProQuest)
subjects Bayesian analysis
Estimation
Frequency analysis
Performance prediction
Skewness
Standard error
Uncertainty
title Uncertainty in Bayesian Leave-One-Out Cross-Validation Based Model Comparison
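The abstract describes comparing two Bayesian models by the difference of their LOO-CV estimates, together with the commonly used normal-approximation standard error that the paper shows is often too small. A minimal sketch of that computation follows; the pointwise values are simulated stand-ins for real PSIS-LOO output (in practice they would come from, e.g., ArviZ), and all variable names are illustrative:

```python
import numpy as np

# Simulated pointwise LOO log predictive densities for two models.
# In practice these come from PSIS-LOO; here model B is deliberately
# made similar to model A, one of the problematic cases in the paper.
rng = np.random.default_rng(0)
n = 200
elpd_i_a = rng.normal(-1.0, 0.5, size=n)              # model A terms
elpd_i_b = elpd_i_a + rng.normal(0.05, 0.3, size=n)   # similar model B

# Pairwise difference of the pointwise leave-one-out terms.
diff_i = elpd_i_b - elpd_i_a

# LOO-CV estimate of the difference in expected predictive performance.
elpd_diff = diff_i.sum()

# Commonly used normal-approximation standard error of the difference;
# per the abstract, this can underestimate the true uncertainty when
# the models make similar predictions, are misspecified, or n is small.
se_diff = np.sqrt(n * diff_i.var(ddof=1))

# Normal-approximation probability that model B predicts better than A.
from math import erf, sqrt
p_b_better = 0.5 * (1 + erf(elpd_diff / (se_diff * sqrt(2))))

print(f"elpd_diff = {elpd_diff:.2f}, se = {se_diff:.2f}, "
      f"P(B better) ≈ {p_b_better:.3f}")
```

The normal approximation used for `p_b_better` is exactly the step whose reliability the paper examines: when the error distribution of the LOO-CV estimator is skewed, this probability can be misleading even as the data size grows.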