
An improved weight-constrained neural network training algorithm

Bibliographic Details
Published in: Neural computing & applications, 2020-05, Vol. 32 (9), p. 4177-4185
Main Authors: Livieris, Ioannis E.; Pintelas, Panagiotis
Format: Article
Language:English
Subjects: Algorithms; Approximation; Artificial Intelligence; Computational Biology/Bioinformatics; Computational Science and Engineering; Computer Science; Data Mining and Knowledge Discovery; Emerging Trends of Applied Neural Computation - E_TRAINCO; Image Processing and Computer Vision; Mathematical analysis; Neural networks; Probability and Statistics in Computer Science; Scaling factors; Weight
Publisher: London: Springer London
Source: Springer Nature
ISSN: 0941-0643
EISSN: 1433-3058
DOI: 10.1007/s00521-019-04342-2
Description: In this work, we propose an improved weight-constrained neural network training algorithm, named iWCNN. The proposed algorithm exploits the numerical efficiency of the L-BFGS matrices together with a gradient-projection strategy for handling the bounds on the weights. Additionally, an attractive property of iWCNN is that it utilizes a new scaling factor for defining the initial Hessian approximation used in the L-BFGS formula. Since the L-BFGS Hessian approximation is built from only a small number of correction vector pairs, our motivation is to exploit these pairs further in order to increase the efficiency of the training algorithm and the convergence rate of the minimization process. Preliminary numerical experiments provide empirical evidence that the proposed training algorithm accelerates the training process.
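
For readers who want a concrete picture of the mechanics the abstract describes, the following Python sketch combines the standard L-BFGS two-loop recursion with a gradient-projection step onto box-constrained weights. It is an illustration only: the function names (project, two_loop, projected_lbfgs_step) are hypothetical, the step length is fixed rather than chosen by a line search, and the scaling factor gamma = (s'y)/(y'y) is the classic Shanno-Phua choice, used here as a stand-in for the refined factor the paper actually proposes.

    import numpy as np

    def project(w, lower, upper):
        # Gradient-projection step: clip each weight into its box [lower, upper].
        return np.clip(w, lower, upper)

    def two_loop(grad, s_list, y_list, gamma):
        # Standard L-BFGS two-loop recursion: computes H_k @ grad from the
        # stored correction pairs (s_i, y_i), with H_0 = gamma * I.
        rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
        q = grad.copy()
        alphas = []
        for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
            alpha = rho * (s @ q)
            alphas.append(alpha)
            q = q - alpha * y
        r = gamma * q  # the scaled initial Hessian approximation enters here
        for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
            beta = rho * (y @ r)
            r = r + (alpha - beta) * s
        return r

    def projected_lbfgs_step(w, grad_fn, s_list, y_list, lower, upper,
                             step=1.0, memory=7):
        # One projected quasi-Newton step on the weight vector w.
        g = grad_fn(w)
        if s_list:
            s, y = s_list[-1], y_list[-1]
            gamma = (s @ y) / (y @ y)  # Shanno-Phua scaling; a stand-in for
                                       # the new factor proposed in the paper
        else:
            gamma = 1.0
        d = -two_loop(g, s_list, y_list, gamma)
        w_new = project(w + step * d, lower, upper)  # enforce the weight bounds
        # Store the new correction pair; in practice pairs with s @ y <= 0
        # are skipped to keep the Hessian approximation positive definite.
        s_list.append(w_new - w)
        y_list.append(grad_fn(w_new) - g)
        if len(s_list) > memory:  # L-BFGS keeps only a few correction pairs
            s_list.pop(0)
            y_list.pop(0)
        return w_new

Each call refines w within its bounds; the small correction-pair memory (seven pairs here) is exactly the quantity the paper seeks to exploit more aggressively through its improved scaling of the initial Hessian approximation.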