Evolutionary Multitasking for Large-Scale Multiobjective Optimization

Evolutionary transfer optimization (ETO) has become a hot research topic in evolutionary computation; it rests on the observation that learning and transferring knowledge across related optimization tasks can improve the efficiency of solving each of them. However, few studies have applied ETO to large-scale multiobjective optimization problems (LMOPs). To fill this gap, this article proposes a new multitasking ETO algorithm that uses a powerful transfer-learning model to solve multiple LMOPs simultaneously. In particular, inspired by adversarial domain adaptation in transfer learning, a discriminative reconstruction network (DRN) model, containing an encoder, a decoder, and a classifier, is created for each LMOP. At each generation, the DRN is trained on the currently obtained nondominated solutions of all LMOPs via backpropagation with gradient descent. With this well-trained DRN model, the proposed algorithm can transfer solutions from source LMOPs directly to the target LMOP to assist its optimization, can evaluate the correlation between the source and target LMOPs to control how solutions are transferred, and can learn a dimensionally reduced Pareto-optimal subspace of the target LMOP to improve the efficiency of transfer optimization in the large-scale search space. Moreover, a real-world multitasking LMOP suite is proposed that simulates the training of deep neural networks (DNNs) on multiple different classification tasks. Finally, the effectiveness of the proposed algorithm is validated on this real-world problem suite and two other synthetic problem suites.
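
The central component described in the abstract, a discriminative reconstruction network with an encoder, a decoder, and a classifier trained by backpropagation on nondominated solutions, can be pictured with a short sketch. The code below is a minimal illustration only, assuming PyTorch; the layer sizes, loss weighting, and the `DRN`/`train_step` names are hypothetical, and the paper's actual architecture and training procedure are not specified in the abstract.

```python
# Illustrative sketch (not the authors' implementation) of a DRN-style model:
# an encoder, a decoder, and a classifier trained jointly on nondominated
# solutions. All sizes, losses, and names here are assumptions.
import torch
import torch.nn as nn


class DRN(nn.Module):
    def __init__(self, n_vars: int, latent_dim: int):
        super().__init__()
        # Encoder maps a decision vector into a low-dimensional latent space,
        # playing the role of a dimensionally reduced search subspace.
        self.encoder = nn.Sequential(nn.Linear(n_vars, latent_dim), nn.Tanh())
        # Decoder reconstructs the original decision vector from the latent code.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, n_vars), nn.Sigmoid())
        # Classifier predicts whether a solution comes from the source or the
        # target task, in the spirit of adversarial domain adaptation.
        self.classifier = nn.Sequential(nn.Linear(latent_dim, 1), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)


def train_step(model, optimizer, solutions, task_labels, alpha=0.5):
    """One backpropagation step on the current nondominated solutions.

    solutions:   tensor of shape (n, n_vars), decision vectors scaled to [0, 1]
    task_labels: tensor of shape (n, 1), 0 = source LMOP, 1 = target LMOP
    alpha:       hypothetical weight balancing reconstruction vs. classification
    """
    recon, pred = model(solutions)
    loss = nn.functional.mse_loss(recon, solutions) \
        + alpha * nn.functional.binary_cross_entropy(pred, task_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Usage sketch: 30 nondominated solutions of a 1000-variable problem.
model = DRN(n_vars=1000, latent_dim=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
solutions = torch.rand(30, 1000)
labels = torch.randint(0, 2, (30, 1)).float()
loss = train_step(model, optimizer, solutions, labels)
```

Once trained, the encoder of such a model could map solutions of a source task into the shared latent space and the decoder could project them back into the target task's decision space, which is one plausible reading of how the solution transfer described in the abstract would operate.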

Bibliographic Details
Published in: IEEE Transactions on Evolutionary Computation, 2023-08, Vol. 27 (4), p. 863-877
Main Authors: Liu, Songbai; Lin, Qiuzhen; Feng, Liang; Wong, Ka-Chun; Tan, Kay Chen
Format: Article
Language: English
Subjects: Algorithms; Artificial neural networks; Back propagation networks; Coders; Computer science; Evolutionary algorithm (EA); Evolutionary computation; Knowledge management; large-scale multiobjective optimization; Machine learning; Multiple objective analysis; Multitasking; Optimization; Pareto optimization; Sociology; Task analysis; Training; Transfer learning
DOI: 10.1109/TEVC.2022.3166482
ISSN: 1089-778X
EISSN: 1941-0026