
Deep networks on toroids: removing symmetries reveals the structure of flat regions in the landscape geometry

Bibliographic Details
Published in: Journal of statistical mechanics, 2022-11, Vol. 2022 (11), p. 114007
Main Authors: Pittorino, Fabrizio, Ferraro, Antonio, Perugini, Gabriele, Feinauer, Christoph, Baldassi, Carlo, Zecchina, Riccardo
Format: Article
Language:English
Description: We systematize the approach to the investigation of deep neural network landscapes by basing it on the geometry of the space of implemented functions rather than the space of parameters. Grouping classifiers into equivalence classes, we develop a standardized parameterization in which all symmetries are removed, resulting in a toroidal topology. On this space, we explore the error landscape rather than the loss. This lets us derive a meaningful notion of the flatness of minimizers and of the geodesic paths connecting them. Using different optimization algorithms that sample minimizers with different flatness, we study their mode connectivity and relative distances. Testing a variety of state-of-the-art architectures and benchmark datasets, we confirm the correlation between flatness and generalization performance; we further show that in function space flatter minima are closer to each other and that the barriers along the geodesics connecting them are small. We also find that minimizers found by variants of gradient descent can be connected by zero-error paths composed of two straight lines in parameter space, i.e. polygonal chains with a single bend. We observe similar qualitative results in neural networks with binary weights and activations, providing one of the first results concerning connectivity in this setting. Our results hinge on symmetry removal, and are in remarkable agreement with the rich phenomenology described by some recent analytical studies performed on simple shallow models.
DOI: 10.1088/1742-5468/ac9832
ISSN: 1742-5468
Source: Institute of Physics
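
Two ingredients of the construction summarized in the description above lend themselves to a short illustration: removing the per-unit scale symmetry of ReLU units so that the remaining parameters live on a product of spheres (the "toroidal" normalization), and checking whether two minimizers can be joined by a zero-error polygonal chain with a single bend. The sketch below is a hypothetical, minimal NumPy illustration, not the authors' code: the names `normalize_relu_layer` and `single_bend_error`, the choice to include the bias in the normalized vector, and the uniform sampling of the path are assumptions made for the example; the paper's actual normalization and path construction may differ in detail.

```python
import numpy as np


def normalize_relu_layer(w_in, b, w_out, eps=1e-12):
    """Rescale each hidden unit of a ReLU layer so that its incoming
    weight-and-bias vector has unit norm, absorbing the scale into the
    outgoing weights.

    Positive homogeneity of ReLU (relu(s * x) = s * relu(x) for s > 0)
    guarantees the implemented function is unchanged, so each unit's
    parameters now lie on a sphere and the layer on a product of spheres.

    Shapes (assumed for this sketch):
      w_in:  (hidden, inputs), b: (hidden,), w_out: (outputs, hidden)
    """
    scale = np.linalg.norm(np.hstack([w_in, b[:, None]]), axis=1) + eps
    w_in_n = w_in / scale[:, None]   # incoming weights on the unit sphere
    b_n = b / scale
    w_out_n = w_out * scale[None, :]  # absorb the scale downstream
    return w_in_n, b_n, w_out_n


def single_bend_error(error_fn, w1, w2, m, n_points=50):
    """Maximum error along the piecewise-linear path w1 -> m -> w2
    (a polygonal chain with a single bend), sampled uniformly on each leg."""
    ts = np.linspace(0.0, 1.0, n_points)
    errors = [error_fn((1 - t) * w1 + t * m) for t in ts]
    errors += [error_fn((1 - t) * m + t * w2) for t in ts]
    return max(errors)
```

In such a setup, `error_fn` would be a hypothetical callable returning the classification error for a flattened weight vector, and `w1`, `w2` two minimizers found by different optimizers; locating a bend point `m` that makes the sampled path error zero would require an additional search, e.g. minimizing `single_bend_error` over `m`.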