Properties of the geometry of solutions and capacity of multi-layer neural networks with Rectified Linear Units activations
Published in: | arXiv.org 2024-05 |
---|---|
Main Authors: | Baldassi, Carlo; Malatesta, Enrico M; Zecchina, Riccardo |
Format: | Article |
Language: | English |
Subjects: | Landscape; Machine learning; Multilayers; Neural networks; Robustness (mathematics); Solution space |
Online Access: | Get full text |
container_title | arXiv.org |
---|---|
creator | Baldassi, Carlo; Malatesta, Enrico M; Zecchina, Riccardo |
description | Rectified Linear Units (ReLU) have become the main model for the neural units in current deep learning systems. This choice was originally suggested as a way to compensate for the so-called vanishing gradient problem, which can undercut stochastic gradient descent (SGD) learning in networks composed of multiple layers. Here we provide analytical results on the effects of ReLUs on the capacity and on the geometrical landscape of the solution space in two-layer neural networks with either binary or real-valued weights. We study the problem of storing an extensive number of random patterns and find that, quite unexpectedly, the capacity of the network remains finite as the number of neurons in the hidden layer increases, at odds with the case of threshold units, for which the capacity diverges. Possibly more importantly, a large deviation approach allows us to find that the geometrical landscape of the solution space has a peculiar structure: while the majority of solutions are close in distance but still isolated, there exist rare regions of solutions which are much denser than the corresponding ones found with threshold units. These solutions are robust to perturbations of the weights and can tolerate large perturbations of the inputs. The analytical results are corroborated by numerical findings. (See the illustrative sketch after this record.) |
doi_str_mv | 10.48550/arxiv.1907.07578 |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-05 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2260226609 |
source | Publicly Available Content Database (Proquest) (PQ_SDU_P3) |
subjects | Landscape; Machine learning; Multilayers; Neural networks; Robustness (mathematics); Solution space |
title | Properties of the geometry of solutions and capacity of multi-layer neural networks with Rectified Linear Units activations |
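The record's abstract refers to the storage (capacity) problem: given P = αN random input patterns with random ±1 labels, one asks whether a two-layer network with K ReLU hidden units can classify them all correctly, and how the largest achievable load α depends on K. The sketch below is a minimal, illustrative numerical probe of that setup; it is not the authors' code, and the architecture (a committee-like machine with a fixed second layer), the zero-margin hinge loss, and all sizes and hyperparameters are assumptions made for this example.

```python
# Minimal, illustrative sketch (not the authors' code): probe the storage problem for a
# two-layer ReLU network on random +/-1 patterns with random +/-1 labels.
# Architecture, loss, and all hyperparameters are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

N, K = 200, 9            # input size and number of hidden ReLU units (assumed)
alpha = 1.0              # load P/N to probe (assumed)
P = int(alpha * N)

X = rng.choice([-1.0, 1.0], size=(P, N))   # random input patterns
y = rng.choice([-1.0, 1.0], size=P)        # random target labels

W = rng.normal(scale=1.0 / np.sqrt(N), size=(K, N))  # real-valued first-layer weights
c = np.ones(K) / K                                    # fixed second layer (committee-like)

def output(X, W):
    h = np.maximum(0.0, X @ W.T)   # ReLU hidden activations, shape (P, K)
    return h @ c                   # pre-activation of the output unit, shape (P,)

lr = 0.05
for epoch in range(5000):
    wrong = y * output(X, W) <= 0          # patterns not yet stored (zero margin)
    if not wrong.any():
        print(f"all {P} patterns stored at load alpha = {alpha} after {epoch} epochs")
        break
    # gradient of the hinge-type loss -y * output, averaged over the unstored patterns
    h = np.maximum(0.0, X[wrong] @ W.T)
    gate = (h > 0).astype(float) * c       # ReLU derivative times second-layer weight
    grad_W = -(y[wrong, None, None] * gate[:, :, None] * X[wrong][:, None, :]).mean(axis=0)
    W -= lr * grad_W
else:
    print(f"not all patterns stored at alpha = {alpha}: {wrong.sum()} errors remain")
```

Repeating such runs over a range of alpha and recording the largest load at which SGD still finds a zero-error solution gives a rough numerical counterpart of the capacity discussed in the abstract.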