
Self-Fusion Convolutional Neural Networks



Bibliographic Details
Published in: Pattern recognition letters, 2021-12, Vol. 152, p. 50-55
Main Authors: Gong, Shenjian, Zhang, Shanshan, Yang, Jian, Yuen, Pong Chi
Format: Article
Language:English
Subjects: Artificial neural networks; Channels; Efficient feature fusion; Feature maps; Image classification; Lightweight; Lightweight neural networks; Modules; Neural networks; Parameters
description
Highlights:
• We propose a novel lightweight feature self-fusion convolutional module.
• We provide a comprehensive comparison with the inverted bottleneck.
• A complete network can be obtained by simply stacking modules.
• Experimental results demonstrate the effectiveness of our self-fusion convolutional module.

Efficiency is an important concern for practical applications; it is therefore of great importance to build effective lightweight networks. This paper proposes a novel lightweight feature self-fusion convolutional (SFC) module, which consists of self-fusion and point-wise convolution. The core of SFC is a three-step self-fusion. First, each input feature map is expanded to a high-dimensional space individually, prohibiting connections with other input channels. Second, all features from the same input are fused in the high-dimensional space to enhance representation ability. Finally, the high-dimensional features are compressed to a low-dimensional space. After self-fusion, all features are connected by one point-wise convolution. Compared to the inverted bottleneck, the SFC module decreases the number of parameters by replacing the dense connections among channels with self-fusion. To the best of our knowledge, SFC is the first method to build lightweight networks by feature self-fusion. We then build a new network, SFC-Net, by stacking SFC modules. Experimental results on the CIFAR and downsampled ImageNet datasets demonstrate that SFC-Net achieves better performance than some previously popular CNNs with fewer parameters, and comparable performance to other lightweight architectures. The code is available at https://github.com/Yankeegsj/Self-fusion.
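The parameter saving described in the abstract can be illustrated with a back-of-the-envelope count. The sketch below is a rough model under stated assumptions, not the paper's reported configuration: each of the three self-fusion steps is treated as a grouped 1x1 convolution with one group per input channel (so no cross-channel weights), followed by a single dense point-wise convolution; the expansion factor `t`, channel count `c`, and kernel size are illustrative values, and bias terms are ignored.

```python
# Hypothetical parameter-count comparison: MobileNetV2-style inverted
# bottleneck vs. the SFC module sketched in the abstract. Layer shapes
# for SFC are assumptions for illustration only.

def inverted_bottleneck_params(c: int, t: int, k: int = 3) -> int:
    """1x1 expand (c -> t*c), k x k depthwise, 1x1 project (t*c -> c)."""
    expanded = t * c
    return c * expanded + expanded * k * k + expanded * c

def sfc_params(c: int, t: int) -> int:
    """Assumed shapes: per-channel expand (1 -> t), per-channel fuse
    (t -> t), per-channel compress (t -> 1), then one dense point-wise
    convolution (c -> c) connecting all channels."""
    expand = c * t        # each input channel expanded individually
    fuse = c * t * t      # fusion restricted to each channel's own group
    compress = c * t      # back to one dimension per channel
    pointwise = c * c     # the single dense cross-channel connection
    return expand + fuse + compress + pointwise

if __name__ == "__main__":
    c, t = 64, 6
    print("inverted bottleneck:", inverted_bottleneck_params(c, t))
    print("SFC (assumed shapes):", sfc_params(c, t))
```

Under these assumptions the dense expand/project terms of the inverted bottleneck grow as t*c^2, while the self-fusion terms grow only as c*t^2 plus one c^2 point-wise layer, which is why restricting fusion to within each channel's group cuts the parameter count.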
DOI: 10.1016/j.patrec.2021.08.022
ISSN: 0167-8655
EISSN: 1872-7344
Source: ScienceDirect Freedom Collection