
Objective Reduction in Many-Objective Optimization: Linear and Nonlinear Algorithms


Bibliographic Details
Published in: IEEE Transactions on Evolutionary Computation, 2013-02, Vol. 17 (1), p. 77-99
Main Authors: Saxena, D. K., Duro, J. A., Tiwari, A., Deb, K., Zhang, Q.
Format: Article
Language:English
Description
Summary: The difficulties faced by existing multiobjective evolutionary algorithms (MOEAs) in handling many-objective problems relate to the inefficiency of selection operators, high computational cost, and difficulty in visualizing the objective space. While many approaches aim to counter these difficulties by increasing the fidelity of the standard selection operators, the objective reduction approach attempts to eliminate objectives that are not essential to describe the Pareto-optimal front (POF). If the number of essential objectives is found to be two or three, the problem can be solved by existing MOEAs; objective reduction can thus make an otherwise unsolvable (many-objective) problem solvable. Even when there are four or more essential objectives, the reduced representation of the problem has a favorable impact on search efficiency, computational cost, and decision making. Hence, the development of generic and robust objective reduction approaches becomes important. This paper presents a framework for linear and nonlinear objective reduction algorithms based on principal component analysis and maximum variance unfolding, respectively. The major contributions of this paper are: 1) enhancements in the core components of the framework for higher robustness, in terms of applicability to a range of problems with disparate degrees of redundancy, mechanisms to handle input data that poorly approximates the true POF, and dependence on fewer parameters to minimize the variability in performance; 2) the proposition of an error measure to assess the quality of results; 3) a sensitivity analysis of the proposed algorithms with respect to the critical parameter involved and to the characteristics of the input data; and 4) a study of the performance of the proposed algorithms vis-à-vis algorithms based on dominance relation preservation, on a wide range of test problems (scaled up to 50 objectives) and two real-world problems.
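
The sketch below is only a rough, hypothetical illustration of the linear (PCA-based) idea summarized in the abstract: given objective values sampled from an approximation of the POF, it eigen-decomposes the correlation matrix of the objectives and keeps the objectives at the conflicting extremes of the significant principal components. The function name pca_reduce, the 95% variance threshold, and the extreme-loading selection rule are simplifying assumptions for illustration; they are not the paper's own linear or nonlinear (maximum-variance-unfolding-based) procedures, which use a more elaborate eigenvalue and correlation analysis.

import numpy as np

def pca_reduce(F, variance_threshold=0.95):
    """Toy PCA-based objective reduction (illustrative only).

    F : (n_solutions, n_objectives) objective values sampled from an
        approximation of the Pareto-optimal front.
    Returns the indices of objectives judged essential by this heuristic.
    """
    # Correlation matrix of the objectives (rows = solutions, columns = objectives).
    R = np.corrcoef(F, rowvar=False)

    # Eigen-decomposition, with components reordered by decreasing eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Number of principal components needed to explain the chosen variance.
    explained = np.cumsum(eigvals) / eigvals.sum()
    n_components = int(np.searchsorted(explained, variance_threshold) + 1)

    # Along each significant component, keep the two objectives at the
    # conflicting extremes (largest positive and largest negative loading).
    keep = set()
    for c in range(n_components):
        v = eigvecs[:, c]
        keep.add(int(np.argmax(v)))
        keep.add(int(np.argmin(v)))
    return sorted(keep)

# Example with synthetic data: f2 is (almost) a copy of f0, so one of the
# two correlated objectives should be identified as redundant and dropped.
rng = np.random.default_rng(0)
f0 = rng.random(200)
f1 = 1.0 - f0 + 0.01 * rng.standard_normal(200)
f2 = f0 + 0.01 * rng.standard_normal(200)
print(pca_reduce(np.column_stack([f0, f1, f2])))   # e.g. [0, 1] or [1, 2]

On this example the heuristic returns two of the three objectives, reflecting the abstract's point that a problem whose essential objectives number two or three becomes tractable for existing MOEAs after reduction.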
ISSN: 1089-778X, 1941-0026
DOI: 10.1109/TEVC.2012.2185847