
Preconditioned Stochastic Gradient Descent

Stochastic gradient descent (SGD) is still the workhorse for many practical problems. However, it converges slowly and can be difficult to tune. It is possible to precondition SGD to accelerate its convergence remarkably, but many attempts in this direction either aim at solving specialized problems or result in methods significantly more complicated than SGD. This paper proposes a new method to adaptively estimate a preconditioner such that the amplitudes of perturbations of the preconditioned stochastic gradient match those of the perturbations of the parameters being optimized, in a way comparable to Newton's method for deterministic optimization. Unlike preconditioners based on secant-equation fitting, as used in deterministic quasi-Newton methods, which assume a positive-definite Hessian and approximate its inverse, the new preconditioner works equally well for both convex and nonconvex optimization with exact or noisy gradients. When a stochastic gradient is used, it naturally damps the gradient noise to stabilize SGD. Efficient preconditioner estimation methods are developed, and with reasonable simplifications they are applicable to large-scale problems. Experimental results demonstrate that, equipped with the new preconditioner and without any tuning effort, preconditioned SGD can efficiently solve many challenging problems, such as training a deep neural network or a recurrent neural network that requires extremely long-term memories.
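
To make the idea concrete, here is a minimal, hypothetical sketch of preconditioned SGD on a toy noisy quadratic problem; it is not the paper's reference implementation. It assumes the preconditioner is kept as P = Q^T Q with a dense Q, and that Q is fitted online by a normalized relative-gradient step on the criterion E[dg^T P dg + dtheta^T P^{-1} dtheta], where dtheta is a small random parameter perturbation and dg the resulting change of the noisy gradient; the stationary point of this fit is the amplitude-matching condition described in the abstract. The toy problem, step sizes, and noise levels are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem: noisy, ill-conditioned quadratic loss 0.5 * theta^T H theta.
n = 10
A = rng.standard_normal((n, n))
H = A @ A.T + 0.1 * np.eye(n)        # positive-definite Hessian

def loss(theta):
    return 0.5 * theta @ H @ theta

def stochastic_grad(theta, noise=1e-2):
    # Exact gradient plus additive noise, standing in for minibatch noise.
    return H @ theta + noise * rng.standard_normal(n)

theta = rng.standard_normal(n)
Q = np.eye(n)            # preconditioner factor, P = Q^T Q
lr_theta = 0.5           # step size for the parameters
lr_Q = 0.05              # step size for fitting the preconditioner

for step in range(1, 1001):
    g = stochastic_grad(theta)

    # Probe the curvature: a small random parameter perturbation and the
    # resulting change of the (noisy) stochastic gradient.
    dtheta = 1e-2 * rng.standard_normal(n)
    dg = stochastic_grad(theta + dtheta) - g

    # One normalized relative-gradient step on the fitting criterion
    # E[dg^T P dg + dtheta^T P^{-1} dtheta] with P = Q^T Q.
    a = Q @ dg                            # a = Q dg
    b = np.linalg.solve(Q.T, dtheta)      # b = Q^{-T} dtheta
    grad_Q = np.outer(a, a) - np.outer(b, b)
    Q -= lr_Q / (np.abs(grad_Q).max() + 1e-12) * grad_Q @ Q

    # Preconditioned SGD update: theta <- theta - lr * P g.
    theta -= lr_theta * (Q.T @ (Q @ g))

    if step % 250 == 0:
        print(f"step {step:4d}   loss {loss(theta):.3e}")

Roughly, the fit settles where E[a a^T] = E[b b^T], i.e. where the preconditioned gradient perturbations and the parameter perturbations have matching amplitudes; with exact gradients on a convex quadratic this pushes P toward the inverse Hessian, while gradient noise shrinks P and damps the updates, consistent with the abstract's claims.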

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2018-05, Vol. 29 (5), p. 1454-1466
Main Author: Li, Xi-Lin
Format: Article
Language: English
Subjects: Acceleration; Approximation; Convergence; Eigenvalues and eigenfunctions; Moisture content; Neural network; Neural networks; Newton method; Newton methods; nonconvex optimization; Optimization; preconditioner; Recurrent neural networks; stochastic gradient descent (SGD); Stochasticity; Training
DOI: 10.1109/TNNLS.2017.2672978
ISSN: 2162-237X
EISSN: 2162-2388
Source: IEEE Xplore (Online service)