
Sometimes Size Does Not Matter

Recently Díaz, Hössjer and Marks (DHM) presented a Bayesian framework to measure cosmological tuning (either fine or coarse) that uses maximum entropy (maxent) distributions on unbounded sample spaces as priors for the parameters of the physical models (https://doi.org/10.1088/1475-7516/2021/07/020). The DHM framework stands in contrast to previous attempts to measure tuning that rely on a uniform prior assumption. However, since the parameters of the models often take values in spaces of infinite size, the uniformity assumption is unwarranted; this is known as the normalization problem. In this paper we explain why and how the DHM framework not only evades the normalization problem but also circumvents other objections to the tuning measurement, such as the so-called weak anthropic principle, the selection of a single maxent distribution and, importantly, the lack of invariance of maxent distributions with respect to data transformations. We also propose to treat fine-tuning as an emergence problem to avoid infinite loops in the prior distribution of hyperparameters (common to all Bayesian analysis), and explain that previous attempts to measure tuning using uniform priors are particular cases of the DHM framework. Finally, we prove a theorem explaining when tuning is fine or coarse for different families of distributions. The theorem is summarized in a table for ease of reference, and the tuning of three physical parameters is analyzed using the conclusions of the theorem.
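The abstract describes the framework only in words. As a purely illustrative sketch (not a computation from the article), the short Python example below shows the kind of calculation the DHM approach rests on: a maximum-entropy prior on an unbounded sample space, and the prior probability that prior assigns to a life-permitting interval. The priors used here (exponential for a positive parameter with fixed mean, normal for a real-valued parameter with fixed mean and variance) are the standard maxent choices for those constraints; all numerical values are made-up placeholders, not values from the paper.

# Illustrative sketch: maxent priors on unbounded sample spaces and the
# prior probability (tuning probability) of a life-permitting interval.
# All numerical values are hypothetical placeholders, not taken from the paper.
from scipy.stats import expon, norm

def tuning_probability(prior, low, high):
    # Prior mass assigned to the life-permitting interval [low, high].
    return prior.cdf(high) - prior.cdf(low)

# Positive parameter with a fixed mean: the maxent distribution is exponential.
prior_positive = expon(scale=1.0)        # assumed mean = 1.0

# Real-valued parameter with fixed mean and variance: the maxent distribution is normal.
prior_real = norm(loc=0.0, scale=1.0)    # assumed mean 0, standard deviation 1

# Hypothetical life-permitting intervals for each parameter.
print(tuning_probability(prior_positive, 0.9, 1.1))
print(tuning_probability(prior_real, -0.05, 0.05))

Small values of this prior probability correspond to fine tuning and larger values to coarse tuning, which is the distinction the paper's theorem makes precise for different families of maxent distributions.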
Bibliographic Details
Published in: Foundations of Physics, 2023-02, Vol. 53 (1), Article 1
Main Authors: Díaz-Pachón, Daniel Andrés; Hössjer, Ola; Marks, Robert J.
Format: Article
Language: English
DOI: 10.1007/s10701-022-00650-1
ISSN: 0015-9018
EISSN: 1572-9516
Publisher: Springer US, New York
Source: Springer Link
Subjects:
Bayesian analysis
Bayesian statistics
Classical and Quantum Gravitation
Classical Mechanics
Constants of nature
Emergence
Fine-tuning
Fundamental constants
History and Philosophical Foundations of Physics
Infinites
Mathematical models
Maximum entropy
Parameters
Philosophy of Science
Physical properties
Physics
Physics and Astronomy
Quantum Physics
Relativity Theory
Standard models
Statistical Physics and Dynamical Systems
Theorems
Weak anthropic principle