Evolutionary Multitasking for Large-Scale Multiobjective Optimization

Bibliographic Details
Published in: IEEE Transactions on Evolutionary Computation, vol. 27, no. 4, pp. 863-877, Aug. 2023
Main Authors: Liu, Songbai, Lin, Qiuzhen, Feng, Liang, Wong, Ka-Chun, Tan, Kay Chen
Format: Article
Language:English
Description
Summary: Evolutionary transfer optimization (ETO) has become a hot research topic in evolutionary computation, based on the observation that knowledge learned from related optimization tasks can be transferred to improve the efficiency of solving others. However, few studies have employed ETO to solve large-scale multiobjective optimization problems (LMOPs). To fill this research gap, this article proposes a new multitasking ETO algorithm that uses a powerful transfer learning model to solve multiple LMOPs simultaneously. In particular, inspired by adversarial domain adaptation in transfer learning, a discriminative reconstruction network (DRN) model (containing an encoder, a decoder, and a classifier) is created for each LMOP. At each generation, the DRN is trained on the nondominated solutions obtained so far for all LMOPs via backpropagation with gradient descent. With this well-trained DRN model, the proposed algorithm can transfer solutions of source LMOPs directly to the target LMOP to assist its optimization, can evaluate the correlation between the source and target LMOPs to control the transfer of solutions, and can learn a dimensionally reduced Pareto-optimal subspace of the target LMOP to improve the efficiency of transfer optimization in the large-scale search space. Moreover, a real-world multitasking LMOP suite is proposed to simulate the training of deep neural networks (DNNs) on multiple different classification tasks. Finally, the effectiveness of the proposed algorithm is validated on this real-world problem suite and two synthetic problem suites.
ISSN:1089-778X
1941-0026
DOI:10.1109/TEVC.2022.3166482
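
To make the DRN model described in the summary more concrete, below is a minimal PyTorch-style sketch of an encoder-decoder-classifier network trained on nondominated solutions pooled from several tasks. The layer sizes, activation functions, loss weighting, and training step are illustrative assumptions and do not reproduce the authors' exact configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DRN(nn.Module):
    """Sketch of a discriminative reconstruction network: encoder + decoder + classifier."""
    def __init__(self, n_vars, latent_dim, n_tasks):
        super().__init__()
        # Encoder: map a large-scale decision vector to a low-dimensional latent code.
        self.encoder = nn.Sequential(nn.Linear(n_vars, latent_dim), nn.Tanh())
        # Decoder: reconstruct the decision vector from the latent code.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, n_vars), nn.Sigmoid())
        # Classifier: predict which task (LMOP) a solution belongs to.
        self.classifier = nn.Linear(latent_dim, n_tasks)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

def train_step(model, optimizer, solutions, task_labels, recon_weight=1.0):
    # One step of backpropagation with gradient descent on pooled nondominated
    # solutions; the loss combines reconstruction error with task-discrimination
    # error (the weighting scheme here is an assumption).
    optimizer.zero_grad()
    reconstructed, logits = model(solutions)
    loss = recon_weight * F.mse_loss(reconstructed, solutions) \
           + F.cross_entropy(logits, task_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = DRN(n_vars=1000, latent_dim=10, n_tasks=2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    solutions = torch.rand(64, 1000)          # normalized decision vectors (dummy data)
    task_labels = torch.randint(0, 2, (64,))  # source task of each solution
    print(train_step(model, optimizer, solutions, task_labels))

The encoder's latent code serves two purposes in the abstract's description: it provides the dimensionally reduced subspace in which search becomes cheaper, and its separability across tasks (via the classifier) indicates how correlated the source and target LMOPs are, which in turn can gate how many solutions are transferred.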