
Memory gradient method for the minimization of functions

A new accelerated gradient method for finding the minimum of a function f(x) whose variables are unconstrained is investigated. The new algorithm can be stated as follows:

$$\tilde x = x + \delta x, \qquad \delta x = -\alpha\, g(x) + \beta\, \delta \hat x$$

where δx is the change in the position vector x, g(x) is the gradient of the function f(x), and α and β are scalars chosen at each step so as to yield the greatest decrease in the function. The symbol $\delta \hat x$ denotes the change in the position vector for the iteration preceding that under consideration. For a nonquadratic function, initial convergence of the present method is faster than that of the Fletcher-Reeves method because of the extra degree of freedom available. Three test problems are considered. A comparison is made between the ordinary gradient method, the Fletcher-Reeves method, and the memory gradient method.
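The update rule in the abstract lends itself to a short numerical sketch. The Python code below is a minimal illustration rather than the authors' original procedure: the two-scalar subproblem for (α, β) is handed to SciPy's generic Nelder-Mead search, and the Rosenbrock test function, starting point, initial guess for (α, β), and iteration limits are assumptions made purely for the example.

```python
# Illustrative sketch only (not the authors' procedure): memory gradient iteration
#   x_new = x + dx,  dx = -alpha * g(x) + beta * dx_prev,
# with (alpha, beta) chosen at each step to give the greatest decrease in f.
import numpy as np
from scipy.optimize import minimize


def memory_gradient(f, grad, x0, max_iters=100, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    dx_prev = np.zeros_like(x)          # no "memory" term on the first iteration
    for _ in range(max_iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break

        # Greatest decrease of f over the two scalars (alpha, beta),
        # here solved with a generic Nelder-Mead search.
        def phi(ab):
            alpha, beta = ab
            return f(x - alpha * g + beta * dx_prev)

        alpha, beta = minimize(phi, x0=[1e-3, 0.0], method="Nelder-Mead").x
        dx = -alpha * g + beta * dx_prev
        x = x + dx
        dx_prev = dx                     # remember this step for the next iteration
    return x


# Example on a nonquadratic test function (Rosenbrock), chosen here for illustration.
def rosen(x):
    return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

def rosen_grad(x):
    return np.array([
        -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2),
        200.0 * (x[1] - x[0] ** 2),
    ])

print(memory_gradient(rosen, rosen_grad, x0=[-1.2, 1.0]))  # should approach (1, 1)
```

For context, the Fletcher-Reeves method uses the same two-term step but fixes the coefficient of the previous step by the ratio of successive squared gradient norms, leaving only a one-dimensional search over α; the free choice of β at every step is the extra degree of freedom the abstract credits for the memory gradient method's faster initial convergence on nonquadratic functions.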

Bibliographic Details
Main Authors: Miele, A., Cantrell, J. W.
Format: Book Chapter
Language: English
container_end_page 263
container_start_page 252
creator Miele, A.
Cantrell, J. W.
doi_str_mv 10.1007/BFb0066685
format book_chapter
contributor Contensou, M.; Lions, J. L.; de Veubeke, B. F.; Krée, P.; Moiseev, N. N.; Balakrishnan, A. V.
eisbn 9783540362753; 3540362754
isbn 9783540049210; 3540049215
publisher Berlin, Heidelberg: Springer Berlin Heidelberg
relation Lecture Notes in Mathematics
rights Springer-Verlag 1970
identifier ISSN: 0075-8434
ispartof Symposium on Optimization, 2006, p.252-263
issn 0075-8434
1617-9692
language eng
source SpringerLink Books Lecture Notes In Mathematics Archive; Springer Nature - Springer Lecture Notes in Mathematics eBooks; SpringerLINK Lecture Notes in Mathematics Archive (Through 1996)
title Memory gradient method for the minimization of functions