
Dynamic connection pruning for densely connected convolutional neural networks

Densely connected convolutional neural networks dominate in a variety of downstream tasks due to their extraordinary performance. However, such networks typically require excessive computing resources, which hinders their deployment on mobile devices. In this paper, we propose a dynamic connection pruning algorithm, a cost-effective method to eliminate a large amount of redundancy in densely connected networks. First, we propose a Sample-Evaluation process to assess the contributions of connections. Specifically, sub-networks are sampled from the unpruned network in each epoch, while the parameters of the unpruned network are subsequently updated and the contributions of the connections are evaluated based on the performance of the sub-networks. Connections with low contributions are pruned first. Then, we search for the distribution of pruning ratios via a Markov process. Finally, we prune the network based on the connection contributions and pruning ratios learned in the above two stages and obtain a lightweight network. The effectiveness of our method is verified on both high-level and low-level tasks. On the CIFAR-10 dataset, the top-1 accuracy barely drops (-0.03%) when FLOPs are reduced by 46.8%. In the super-resolution task, our model remarkably outperforms other lightweight networks in both visual and quantitative experiments. These results verify the effectiveness and generality of our proposed method.
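The Sample-Evaluation idea in the abstract — sample sub-networks each epoch, score connections by the performance of the sub-networks they appear in, then prune the lowest-scoring ones — can be illustrated with a toy sketch. This is not the authors' implementation: the `eval_fn` proxy, the 0.5 sampling probability, and the simple score-averaging scheme are all illustrative assumptions standing in for real training and validation.

```python
import random

def sample_evaluate_prune(num_connections, num_epochs, keep_ratio, eval_fn, seed=0):
    """Toy Sample-Evaluation loop (assumed simplification of the paper's method):
    estimate each connection's contribution by sampling random sub-networks and
    crediting connections that appear in well-performing samples, then keep only
    the top keep_ratio fraction of connections by average contribution."""
    rng = random.Random(seed)
    contribution = [0.0] * num_connections  # running sum of scores per connection
    counts = [0] * num_connections          # how often each connection was sampled
    for _ in range(num_epochs):
        # Sample a sub-network: each connection is kept with probability 0.5.
        mask = [rng.random() < 0.5 for _ in range(num_connections)]
        score = eval_fn(mask)  # proxy for the sub-network's validation performance
        for i, kept in enumerate(mask):
            if kept:
                contribution[i] += score
                counts[i] += 1
    # Average contribution per connection (0 if a connection was never sampled).
    avg = [c / n if n else 0.0 for c, n in zip(contribution, counts)]
    # Prune: low-contribution connections go first, i.e. keep the top-k.
    k = max(1, int(keep_ratio * num_connections))
    keep = sorted(range(num_connections), key=lambda i: avg[i], reverse=True)[:k]
    return sorted(keep)
```

As a usage example, with a hypothetical `eval_fn` that simply sums per-connection weights `[1.0, 0.0, 0.0, 0.5]`, the loop reliably ranks connections 0 and 3 above the two zero-weight connections, so `keep_ratio=0.5` retains exactly those two.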

Bibliographic Details
Published in: Applied intelligence (Dordrecht, Netherlands), 2023-08, Vol.53 (16), p.19505-19521
Main Authors: Hu, Xinyi, Fang, Hangxiang, Zhang, Ling, Zhang, Xue, Yang, Howard H., Yang, Dongxiao, Peng, Bo, Li, Zheyang, Hu, Haoji
Format: Article
Language: English
Subjects: Algorithms; Artificial Intelligence; Artificial neural networks; Computer Science; Effectiveness; Lightweight; Machines; Manufacturing; Markov processes; Mechanical Engineering; Neural networks; Processes; Redundancy
DOI: 10.1007/s10489-023-04513-8
ISSN: 0924-669X
EISSN: 1573-7497
Source: ABI/INFORM Global; Springer Link