
Compressing convolutional neural networks with hierarchical Tucker-2 decomposition

Convolutional neural networks (CNNs) play a crucial role and achieve top results in computer vision tasks, but at the price of high computational and storage cost. One way to address this problem is to approximate the convolution kernel with tensor decomposition methods, replacing the original kernel with a sequence of kernels in a lower-dimensional space. This study proposes a novel CNN compression technique based on the hierarchical Tucker-2 (HT-2) tensor decomposition and makes an important contribution to the field of neural network compression based on low-rank approximations. We demonstrate the effectiveness of our approach on many CNN architectures on the CIFAR-10 and ImageNet datasets. The results show a significant reduction in parameters and FLOPs with only a minor drop in classification accuracy. Compared with state-of-the-art compression methods, including pruning and matrix/tensor decomposition, HT-2, as a new alternative, outperforms most of the cited methods. The proposed approach is straightforward to implement and can be easily coded in any deep learning library.
• Hierarchical Tucker-2 decomposition is applied to compress CNNs.
• The proposed method is compared with state-of-the-art CNN compression methods.
• Substantial parameter and FLOPs compression is obtained at a marginal accuracy drop.
• A new notation for the Kruskal convolution is introduced.
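The description above notes that the original kernel is approximated and replaced by a sequence of kernels in a lower-dimensional space. As an illustrative sketch only — a plain Tucker-2 factorization via truncated HOSVD, not the paper's exact HT-2 algorithm, with an arbitrary assumed layer shape (64 output channels, 32 input channels, 3×3 spatial) and ranks (16, 16) — the idea can be coded in a few lines of NumPy:

```python
import numpy as np

def tucker2_factorize(W, r_t, r_s):
    """Tucker-2 factorization of a conv kernel W (T x S x d x d) via
    truncated HOSVD: W ~= G x_1 A x_2 B with orthonormal factors A, B."""
    T, S, _, _ = W.shape
    # Mode-1 unfolding (output channels): T x (S*d*d)
    U1, _, _ = np.linalg.svd(W.reshape(T, -1), full_matrices=False)
    A = U1[:, :r_t]                                  # T x r_t factor
    # Mode-2 unfolding (input channels): S x (T*d*d)
    U2, _, _ = np.linalg.svd(np.moveaxis(W, 1, 0).reshape(S, -1),
                             full_matrices=False)
    B = U2[:, :r_s]                                  # S x r_s factor
    # Core tensor: orthogonal projection of W onto the factor subspaces
    G = np.einsum('tsij,ta,sb->abij', W, A, B)       # r_t x r_s x d x d
    return A, B, G

def tucker2_reconstruct(A, B, G):
    # W_hat[t,s,i,j] = sum_{a,b} A[t,a] * B[s,b] * G[a,b,i,j]
    return np.einsum('abij,ta,sb->tsij', G, A, B)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32, 3, 3))              # hypothetical conv kernel
A, B, G = tucker2_factorize(W, r_t=16, r_s=16)
W_hat = tucker2_reconstruct(A, B, G)
print(W.size, A.size + B.size + G.size)              # → 18432 3840
```

In a network, B plays the role of a 1×1 convolution that reduces input channels, G a small 3×3 convolution in the low-rank space, and A a 1×1 convolution that restores output channels — which is why such a compressed layer is easy to express in any deep learning library.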


Bibliographic Details
Published in: Applied soft computing, 2023-01, Vol. 132, p. 109856, Article 109856
Main Authors: Gabor, Mateusz, Zdunek, Rafał
Format: Article
Language:English
Subjects: Convolutional neural networks; Hierarchical Tucker decomposition; Tensor decomposition
DOI: 10.1016/j.asoc.2022.109856
ISSN: 1568-4946
EISSN: 1872-9681
Publisher: Elsevier B.V.