From Multitask Gradient Descent to Gradient-Free Evolutionary Multitasking: A Proof of Faster Convergence

Bibliographic Details
Published in: IEEE Transactions on Cybernetics, 2022-08, Vol. 52 (8), p. 8561-8573
Main Authors: Bai, Lu; Lin, Wu; Gupta, Abhishek; Ong, Yew-Soon
Format: Article
Language:English
Description
Summary: Evolutionary multitasking, which solves multiple optimization tasks simultaneously, has gained increasing research attention in recent years. By utilizing useful information from related tasks while solving the tasks concurrently, improved performance has been shown in various problems. Despite the success enjoyed by existing evolutionary multitasking algorithms, there is still a lack of theoretical studies guaranteeing faster convergence compared to the conventional single-task case. To analyze the effects of transferred information from related tasks, in this article, we first put forward a novel multitask gradient descent (MTGD) algorithm, which enhances the standard gradient descent updates with a multitask interaction term. The convergence of the resulting MTGD is derived. Furthermore, we present the first proof of faster convergence of MTGD relative to its single-task counterpart. Utilizing MTGD, we formulate a gradient-free evolutionary multitasking algorithm called multitask evolution strategies (MTESs). Importantly, the single-task evolution strategies (ESs) we utilize are shown to asymptotically approximate gradient descent and, hence, the faster convergence results derived for MTGD extend to the case of MTES as well. Numerical experiments comparing MTES with single-task ES on synthetic benchmarks and practical optimization examples serve to substantiate our theoretical claim.
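The abstract describes MTGD as standard gradient descent augmented with a multitask interaction term. The paper's exact update rule is not given here, so the following is only a minimal sketch of the general idea, assuming a simple symmetric coupling term that pulls the iterates of related tasks toward one another (the learning rate `lr`, coupling strength `coupling`, and the specific form of the interaction are illustrative assumptions, not the authors' formulation):

```python
import numpy as np

def mtgd(grads, x0s, lr=0.1, coupling=0.05, steps=200):
    """Sketch of multitask gradient descent: each task takes a standard
    gradient step plus an interaction term that pulls its iterate toward
    the iterates of the other tasks. Illustrative only; the paper's
    actual interaction term may differ."""
    xs = [np.array(x, dtype=float) for x in x0s]
    for _ in range(steps):
        new_xs = []
        for i, x in enumerate(xs):
            # Attraction toward the other tasks' current iterates.
            interact = sum(xj - x for j, xj in enumerate(xs) if j != i)
            new_xs.append(x - lr * grads[i](x) + coupling * interact)
        xs = new_xs
    return xs

# Two related quadratic tasks with nearby optima at 1.0 and 1.2.
f1_grad = lambda x: 2.0 * (x - 1.0)
f2_grad = lambda x: 2.0 * (x - 1.2)
sols = mtgd([f1_grad, f2_grad], [np.array([5.0]), np.array([-5.0])])
```

With similar tasks the coupling biases each iterate slightly toward the other task's optimum, which is the intuition behind transfer helping convergence when the tasks are related; for unrelated tasks the same coupling would introduce bias, which is why the choice of interaction strength matters.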
ISSN: 2168-2267, 2168-2275
DOI: 10.1109/TCYB.2021.3052509