
BETANAS: BalancEd TrAining and selective drop for Neural Architecture Search

Automatic neural architecture search techniques are becoming increasingly important in machine learning. In particular, weight-sharing methods have shown remarkable potential for finding good network architectures with few computational resources. However, existing weight-sharing methods suffer from limitations in their search strategies: they either train all network paths uniformly to convergence, which introduces conflicts between branches and wastes a large amount of computation on unpromising candidates, or train branches selectively at different frequencies, which leads to unfair evaluation and comparison among paths. To address these issues, we propose a novel neural architecture search method with a balanced training strategy to ensure fair comparisons and a selective drop mechanism to reduce conflicts among candidate paths. Experimental results show that the proposed method achieves a leading performance of 79.0% on ImageNet under mobile settings, outperforming other state-of-the-art methods in both accuracy and efficiency.
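The abstract describes the two ideas only at a high level. As a rough, non-authoritative illustration of how a weight-sharing supernet could combine balanced training (every remaining candidate operation in a layer is trained about equally often, keeping path comparisons fair) with selective drop (candidates whose running score falls well behind the best one in the same layer are removed from the pool), here is a minimal PyTorch-style sketch. It is not the authors' implementation; the candidate set, the score EMA, and the drop margin are all assumptions made for illustration.

import torch
import torch.nn as nn

# Illustrative candidate operations; the paper's actual search space is not specified here.
CAND_OPS = {
    "conv3x3": lambda c: nn.Conv2d(c, c, 3, padding=1),
    "conv5x5": lambda c: nn.Conv2d(c, c, 5, padding=2),
    "identity": lambda c: nn.Identity(),
}

class SuperLayer(nn.Module):
    # One supernet layer holding every candidate op plus the bookkeeping
    # needed for balanced sampling and selective drop.
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleDict({name: make(channels) for name, make in CAND_OPS.items()})
        self.alive = set(CAND_OPS)                    # candidates not yet dropped
        self.train_count = {n: 0 for n in CAND_OPS}   # how often each op has been trained
        self.score = {n: 0.0 for n in CAND_OPS}       # running (EMA) evaluation score per op

    def sample_balanced(self):
        # Balanced training: always pick the alive op trained least so far,
        # so every surviving candidate receives a comparable number of updates.
        name = min(self.alive, key=lambda n: self.train_count[n])
        self.train_count[name] += 1
        return name

    def update_score(self, name, value, momentum=0.9):
        # Smooth per-op evaluation scores so one noisy measurement cannot drop an op.
        self.score[name] = momentum * self.score[name] + (1.0 - momentum) * value

    def selective_drop(self, margin=0.1):
        # Selective drop: prune ops trailing the best alive op by more than `margin`,
        # keeping at least one candidate per layer.
        best = max(self.score[n] for n in self.alive)
        for n in list(self.alive):
            if len(self.alive) > 1 and self.score[n] < best - margin:
                self.alive.discard(n)

    def forward(self, x, name):
        return self.ops[name](x)

# Sketch of one search step: sample a balanced path and run it; in a full loop,
# update_score and selective_drop would be called with scores from held-out data.
layers = nn.ModuleList([SuperLayer(16) for _ in range(4)])
x = torch.randn(2, 16, 32, 32)
path = [layer.sample_balanced() for layer in layers]
for layer, name in zip(layers, path):
    x = layer(x, name)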

Bibliographic Details
Published in: arXiv.org, 2019-12
Main Authors: Fang, Muyuan; Wang, Qiang; Zhao, Zhong
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Architecture; Machine learning; Searching; Training; Weight