Pseudo expected improvement criterion for parallel EGO algorithm
Published in: | Journal of global optimization 2017-07, Vol.68 (3), p.641-662 |
---|---|
Main Authors: | Zhan, Dawei; Qian, Jiachang; Cheng, Yuansheng |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Benchmarks; Computer Science; Computer simulation; Computing time; Convergence; Criteria; Design analysis; Efficiency; Global optimization; Mathematics; Mathematics and Statistics; Operations Research/Decision Theory; Optimization; Parallel processing; Real Functions; State of the art |
container_end_page | 662 |
container_issue | 3 |
container_start_page | 641 |
container_title | Journal of global optimization |
container_volume | 68 |
creator | Zhan, Dawei Qian, Jiachang Cheng, Yuansheng |
description | The efficient global optimization (EGO) algorithm is well known for its efficiency in solving computationally expensive optimization problems. However, the expected improvement (EI) criterion used for selecting candidate points in the EGO process produces only one design point per optimization cycle, which wastes time when parallel computing is available. In this work, a new criterion called pseudo expected improvement (PEI) is proposed for developing parallel EGO algorithms. In each cycle, the first updating point is selected by the initial EI function. After that, the PEI function is built to approximate the real updated EI function by multiplying the initial EI function by an influence function of the updating point. The influence function is designed to simulate the impact that the updating point will have on the EI function, and depends only on the position of the updating point (not on its function value). Therefore, the next updating point can be identified by maximizing the PEI function without evaluating the first updating point. As the sequential process goes on, a desired number of updating points can be selected by the PEI criterion within one optimization cycle. The efficiency of the proposed PEI criterion is validated on six benchmarks with dimensions from 2 to 6. The results show that the proposed PEI algorithm performs significantly better than the standard EGO algorithm, and gains significant improvements on five of the six test problems compared with a state-of-the-art parallel EGO algorithm. Furthermore, additional experiments show that the convergence of the proposed algorithm is significantly affected when the global maximum of the PEI function is not found; it is recommended to use as many evaluations as one can afford to find the global maximum of the PEI function. |
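The description above is concrete enough to sketch the batch-selection step in code. The following is a minimal Python sketch, not the authors' implementation: the surrogate helpers (corr, gp_fit, gp_predict, expected_improvement, select_batch_pei) are hypothetical names, and the influence function 1 - r(x, x_q) built from a Gaussian correlation is an assumption chosen to match the stated property that the influence depends only on the position of the already-selected updating point.

```python
# Minimal sketch of one PEI selection cycle -- illustrative only, not the authors' code.
# Assumed pieces: a toy zero-mean GP surrogate with Gaussian correlation, and an
# influence function IF(x) = 1 - r(x, x_q), which depends only on the position of
# the already-selected point x_q, as the abstract describes.
import numpy as np
from scipy.stats import norm

def corr(A, B, theta):
    """Gaussian correlation r(a, b) = exp(-theta * ||a - b||^2) between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-theta * d2)

def gp_fit(X, y, theta=10.0, nugget=1e-8):
    """Fit a simple zero-mean, unit-variance GP surrogate to the evaluated points."""
    R = corr(X, X, theta) + nugget * np.eye(len(X))
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return {"X": X, "L": L, "alpha": alpha, "theta": theta}

def gp_predict(model, x):
    """Posterior mean and standard deviation of the surrogate at candidate points x."""
    r = corr(x, model["X"], model["theta"])
    mu = r @ model["alpha"]
    v = np.linalg.solve(model["L"], r.T)
    var = np.clip(1.0 - (v ** 2).sum(axis=0), 0.0, None)
    return mu, np.sqrt(var)

def expected_improvement(model, x, y_best):
    """Standard EI for minimisation."""
    mu, s = gp_predict(model, x)
    s = np.maximum(s, 1e-12)
    z = (y_best - mu) / s
    return (y_best - mu) * norm.cdf(z) + s * norm.pdf(z)

def select_batch_pei(model, y_best, candidates, q):
    """Select q updating points in one cycle: the first by EI, the rest by EI multiplied
    by the influence functions of the points already chosen, with no new evaluations."""
    pei = expected_improvement(model, candidates, y_best)
    batch = []
    for _ in range(q):
        idx = int(np.argmax(pei))              # maximise the current (pseudo) EI
        x_new = candidates[idx]
        batch.append(x_new)
        # Influence of x_new: 0 at x_new and approaching 1 far away, so the next
        # maximiser is pushed away from points already in the batch.
        influence = 1.0 - corr(candidates, x_new[None, :], model["theta"]).ravel()
        pei = pei * influence
    return np.array(batch)

# Toy 1-D usage: the q points returned can then be evaluated in parallel.
rng = np.random.default_rng(0)
f = lambda x: np.sin(3.0 * x[:, 0]) + x[:, 0] ** 2
X = rng.uniform(-1.0, 1.0, (6, 1))
y = f(X)
model = gp_fit(X, y)
candidates = np.linspace(-1.0, 1.0, 200)[:, None]   # grid stands in for continuous maximisation
batch = select_batch_pei(model, y.min(), candidates, q=3)
print(batch.ravel())
```

The candidate grid stands in for a continuous maximisation of PEI; because the loop never evaluates the objective at the selected points, all q points of a cycle can be dispatched to parallel workers once the batch is assembled.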
doi_str_mv | 10.1007/s10898-016-0484-7 |
format | article |
identifier | ISSN: 0925-5001 |
ispartof | Journal of global optimization, 2017-07, Vol.68 (3), p.641-662 |
issn | 0925-5001 1573-2916 |
language | eng |
recordid | cdi_proquest_journals_1907233456 |
source | ABI/INFORM Global; Springer Link |
subjects | Algorithms Benchmarks Computer Science Computer simulation Computing time Convergence Criteria Design analysis Efficiency Global optimization Mathematics Mathematics and Statistics Operations Research/Decision Theory Optimization Parallel processing Real Functions State of the art |
title | Pseudo expected improvement criterion for parallel EGO algorithm |