Deep neural networks with a set of node-wise varying activation functions
In this study, we present deep neural networks with a set of node-wise varying activation functions. The feature-learning abilities of the nodes are affected by the selected activation functions, where the nodes with smaller indices become increasingly more sensitive during training. As a result, the features learned by the nodes are sorted by the node indices in order of their importance, such that more sensitive nodes are related to more important features. The proposed networks learn not only the input features but also their importance. Nodes with lower importance can be pruned to reduce the complexity of the networks, and the pruned networks can be retrained without incurring performance losses. We validated the feature-sorting property of the proposed method using both shallow and deep networks, as well as deep networks transferred from existing networks.
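The abstract describes activation functions whose sensitivity varies with the node index, so that training concentrates the most important features in the lowest-index nodes. A minimal sketch of that idea, assuming a tanh activation with a node-indexed slope (the slope schedule, the tanh choice, and all names here are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

def nodewise_activations(z, alpha_min=0.2, alpha_max=2.0):
    """Apply a different tanh slope to each node of a layer.

    Hypothetical illustration: nodes with smaller indices get larger
    slopes (higher sensitivity), so training tends to concentrate the
    more important features in the low-index nodes.
    """
    n = z.shape[-1]
    # Slopes decrease linearly with node index: node 0 is most sensitive.
    alphas = np.linspace(alpha_max, alpha_min, n)
    return np.tanh(alphas * z)

# Toy forward pass through one layer with node-wise varying activations.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # batch of 4 inputs, 8 features
W = rng.normal(size=(8, 6)) * 0.5    # layer with 6 nodes
h = nodewise_activations(x @ W)
print(h.shape)                       # (4, 6)
```

Under such a scheme, pruning as described in the abstract would amount to dropping the highest-index (least sensitive) nodes and the corresponding weight columns.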
Published in: | Neural networks 2020-06, Vol.126, p.118-131 |
---|---|
Main Authors: | Jang, Jinhyeok; Cho, Hyunjoong; Kim, Jaehong; Lee, Jaeyeon; Yang, Seungjoon |
Format: | Article |
Language: | English |
Subjects: | Deep network; Principal component analysis; Pruning; Varying activation |
DOI: | 10.1016/j.neunet.2020.03.004 |
ISSN: | 0893-6080 |
EISSN: | 1879-2782 |
PMID: | 32203875 |
Publisher: | Elsevier Ltd (United States) |