An analysis of training and generalization errors in shallow and deep networks

This paper is motivated by an open problem around deep networks, namely, the apparent absence of over-fitting despite large over-parametrization, which allows perfect fitting of the training data. We analyze this phenomenon for regression problems in which each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is inappropriate for measuring the generalization error when approximating compositional functions, if one is to take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss, and sometimes as a pointwise error. We give estimates on exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at which test data.
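
As a quick gloss on the error measures the abstract contrasts, here is a minimal sketch in notation of our own choosing (f, P, X, and μ are illustrative, not the paper's symbols): for a target function f, an approximating network P, a compact domain X, and a probability measure μ on X,

\[
\mathcal{E}_2(P) \;=\; \mathbb{E}_{x\sim\mu}\!\left[\big(f(x)-P(x)\big)^2\right] \qquad\text{(expected square loss)},
\]
\[
\mathcal{E}_\infty(P) \;=\; \sup_{x\in X}\big|f(x)-P(x)\big| \qquad\text{(maximum/uniform loss)},
\]

with the pointwise error \(|f(x_0)-P(x_0)|\) evaluated at individual test points \(x_0\). Since \(\mathcal{E}_2(P) \le \mathcal{E}_\infty(P)^2\) whenever μ is a probability measure, a bound on the maximum loss is the stronger guarantee, which is why the abstract argues for it when approximating compositional targets.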

Bibliographic Details
Published in: Neural networks, 2020-01, Vol. 121, pp. 229-241
Main Authors: Mhaskar, H.N., Poggio, T.
Format: Article
Language:English
Subjects: Deep learning; Generalization error; Humans; Interpolatory approximation; Machine Learning; Neural Networks, Computer
DOI: 10.1016/j.neunet.2019.08.028
PMID: 31574413
Publisher: Elsevier Ltd (United States)
ISSN: 0893-6080
EISSN: 1879-2782
Source: ScienceDirect Freedom Collection