
Enhancing multi-objective evolutionary neural architecture search with training-free Pareto local search

Bibliographic Details
Published in: Applied intelligence (Dordrecht, Netherlands), 2023-04, Vol.53 (8), p.8654-8672
Main Authors: Phan, Quan Minh; Luong, Ngoc Hoang
Format: Article
Language:English
Subjects: Accuracy; Artificial Intelligence; Artificial neural networks; Computer architecture; Computer Science; Computing costs; Emerging Topics in Artificial Intelligence Selected from IEA/AIE2021; Evolutionary algorithms; Floating point arithmetic; Genetic algorithms; Machines; Manufacturing; Mechanical Engineering; Multiple objective analysis; Neural networks; Pareto optimization; Pareto optimum; Performance prediction; Processes; Search methods; Sorting algorithms; Source code; Training
ISSN: 0924-669X
EISSN: 1573-7497
DOI: 10.1007/s10489-022-04032-y
Publisher: Springer US (New York)
Source: ABI/INFORM Global; Springer Nature
Description: Neural Architecture Search (NAS), which automates the design of high-performing neural network architectures, is a multi-objective optimization problem. A single ideal architecture that optimizes both predictive performance (e.g., network accuracy) and computational cost (e.g., model size, number of parameters, number of floating-point operations) does not exist. Instead, there is a Pareto front of candidate architectures, each representing an optimal trade-off between the competing objectives. Multi-Objective Evolutionary Algorithms (MOEAs) are often employed to approximate such Pareto-optimal fronts for NAS problems. In this article, we introduce a local search method, Potential Solution Improving (PSI), that aims to improve certain potential solutions on approximation fronts to enhance the performance of MOEAs. The main bottleneck in NAS is the considerable computational cost incurred by having to train a large number of candidate architectures in order to evaluate their accuracy. Recently, Synaptic Flow has been proposed as a metric that characterizes the relative performance of deep neural networks without running any training epoch. We therefore propose that our PSI method make use of this training-free metric as a proxy for network accuracy during local search steps. We conduct experiments with the well-known MOEA Non-dominated Sorting Genetic Algorithm II (NSGA-II) coupled with the training-free PSI local search on NAS problems created from the standard benchmarks NAS-Bench-101 and NAS-Bench-201. Experimental results confirm the efficiency enhancements brought about by our proposed method, which reduces the computational cost by four times compared to the baseline approach. The source code for the experiments in the article can be found at: https://github.com/ELO-Lab/MOENAS-TF-PSI .
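
To make the abstract's idea concrete, below is a minimal, self-contained Python sketch of the kind of Pareto local search step it describes: a candidate architecture is perturbed one decision variable at a time, and a neighbor is accepted only if it Pareto-dominates the current solution on a (proxy-accuracy, cost) objective pair. This is not the authors' implementation (see https://github.com/ELO-Lab/MOENAS-TF-PSI for that); `proxy_score` and `cost` are hypothetical stand-ins for a Synaptic-Flow-style training-free metric and a complexity measure such as FLOPs.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def proxy_score(arch):
    # Hypothetical stand-in for a training-free metric such as Synaptic Flow.
    # Higher is assumed better, so it is negated to fit a minimization setting.
    return -sum(arch)

def cost(arch):
    # Hypothetical stand-in for a computational-cost objective (e.g., FLOPs).
    return sum(x * x for x in arch)

def objectives(arch):
    return (proxy_score(arch), cost(arch))

def local_search_step(arch, num_ops=5):
    """One PSI-style step: scan single-variable perturbations in random order
    and move to the first neighbor that Pareto-dominates the current solution."""
    current = objectives(arch)
    for i in random.sample(range(len(arch)), len(arch)):
        neighbor = list(arch)
        neighbor[i] = random.randrange(num_ops)  # swap in a different operation
        if dominates(objectives(neighbor), current):
            return neighbor
    return arch  # no dominating neighbor found; keep the original

if __name__ == "__main__":
    random.seed(0)
    arch = [random.randrange(5) for _ in range(6)]  # toy architecture encoding
    improved = local_search_step(arch)
    print(arch, "->", improved, objectives(improved))
```

Because the proxy is evaluated with a single forward-style computation rather than a training run, each local search step costs a tiny fraction of a trained-accuracy evaluation, which is the source of the efficiency gain the abstract reports.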