Hausdorff dimension, heavy tails, and generalization in neural networks

Despite its success in a wide range of applications, characterizing the generalization properties of stochastic gradient descent (SGD) in non-convex deep learning problems is still an important challenge. While modeling the trajectories of SGD via stochastic differential equations (SDEs) under heavy-tailed gradient noise has recently shed light on several peculiar characteristics of SGD, a rigorous treatment of the generalization properties of such SDEs in a learning-theoretic framework is still missing. Aiming to bridge this gap, in this paper we prove generalization bounds for SGD under the assumption that its trajectories can be well approximated by a Feller process, which defines a rich class of Markov processes that includes several recent SDE representations (both Brownian and heavy-tailed) as special cases. We show that the generalization error can be controlled by the Hausdorff dimension of the trajectories, which is intimately linked to the tail behavior of the driving process. Our results imply that heavier-tailed processes should achieve better generalization; hence, the tail-index of the process can be used as a notion of ‘capacity metric’. We support our theory with experiments on deep neural networks illustrating that the proposed capacity metric accurately estimates the generalization error, and that it does not necessarily grow with the number of parameters, unlike existing capacity metrics in the literature.
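
As a rough illustration of the ‘tail-index as capacity metric’ idea, the sketch below estimates a heavy-tail index from a sample of gradient-noise magnitudes with the classical Hill estimator. This is a generic estimator chosen for brevity, not the procedure used in the paper's experiments; the function name, the choice of k, and the simulated data are illustrative assumptions. Under the paper's thesis, a smaller estimated tail-index (heavier tails) would be read as predicting a smaller generalization gap.

    # Minimal sketch (assumptions as noted above): Hill estimator of a tail index
    # applied to gradient-noise magnitudes; not the authors' own estimator.
    import numpy as np

    def hill_tail_index(samples, k):
        """Hill estimate of the tail index alpha from the k largest |samples|."""
        x = np.sort(np.abs(np.asarray(samples)))[::-1]   # descending order statistics
        gamma_hat = np.mean(np.log(x[:k] / x[k]))        # estimates 1/alpha
        return 1.0 / gamma_hat

    # Stand-in for gradient-noise norms collected during training: here we simulate
    # heavy-tailed data (Student-t with 1.5 degrees of freedom has tail index 1.5),
    # so the estimate can be checked against a known ground truth.
    rng = np.random.default_rng(0)
    noise_norms = np.abs(rng.standard_t(df=1.5, size=50_000))
    print(f"estimated tail-index: {hill_tail_index(noise_norms, k=500):.2f}")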

Bibliographic Details
Published in: Journal of Statistical Mechanics, 2021-12, Vol. 2021 (12), p. 124014
Main Authors: Şimşekli, Umut; Sener, Ozan; Deligiannidis, George; Erdogdu, Murat A
Format: Article
Language: English
DOI: 10.1088/1742-5468/ac3ae7
ISSN: 1742-5468
EISSN: 1742-5468