
Dictionary-enabled efficient training of ConvNets for image classification

Convolutional networks (ConvNets) are computationally expensive but well known for their performance on image data. One way to reduce their complexity is to exploit the sparsity inherent in the data. However, since the gradients involved in ConvNets require dynamic updates, applying data sparsity in the training step is not straightforward. Dictionary-based learning methods can be useful here, since they encode the original data in a sparse form. This paper proposes a new dictionary-based training paradigm for ConvNets that exploits redundancy in the training data while keeping the distinctive features intact. The ConvNet is then trained on the reduced, sparse dataset. The new approach significantly reduces training time without compromising accuracy. To the best of our knowledge, this is the first implementation of a ConvNet on dictionary-based sparse training data. The proposed method is validated on three publicly available datasets: MNIST, USPS, and MNIST FASHION. The experimental results show a 4.5-fold reduction in the overall computational burden of a vanilla ConvNet on all three datasets, while accuracy remains intact at 97.21% for MNIST, 96.81% for USPS, and 88.4% for FASHION. These results are comparable to state-of-the-art architectures, such as ResNet-{18, 34, 50}, trained on the full training dataset.

Highlights:
• ConvNets are powerful for images but computationally expensive.
• The paper proposes dictionary-based training for ConvNets that exploits sparsity.
• The proposed method preserves orthogonal features and reduces training time.
• Results: a 4.5× reduction in computational burden.
• High accuracy: 97.21% MNIST, 96.81% USPS, 88.4% FASHION.
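To make the general idea concrete, the sketch below shows one plausible way to combine dictionary-based sparse coding with ConvNet training. This is not the authors' implementation: the dataset (scikit-learn's 8×8 digits, standing in for MNIST/USPS), the number of dictionary atoms, the OMP sparsity level, and the small network are all illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): compress the training set with
# dictionary learning / sparse coding, then train a small ConvNet on the
# sparsified reconstruction.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning

# 1. Load images and flatten them into signal vectors.
digits = load_digits()
X = digits.images.reshape(len(digits.images), -1).astype(np.float32) / 16.0  # (n, 64)
y = digits.target

# 2. Learn a dictionary and encode each image with only a few active atoms;
#    the sparse codes are the reduced representation of the training data.
dico = MiniBatchDictionaryLearning(
    n_components=48,              # number of dictionary atoms (assumed value)
    alpha=1.0,
    batch_size=64,
    transform_algorithm="omp",    # orthogonal matching pursuit for sparse codes
    transform_n_nonzero_coefs=8,  # keep only 8 active atoms per image (assumed)
    random_state=0,
)
codes = dico.fit(X).transform(X)                           # (n, 48), mostly zeros
X_sparse = (codes @ dico.components_).astype(np.float32)   # sparse reconstruction

# 3. Train a small ConvNet on the reconstructed (sparsified) images.
class SmallConvNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # 8x8 -> 4x4
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

images = torch.from_numpy(X_sparse).reshape(-1, 1, 8, 8)
labels = torch.from_numpy(y).long()
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(images, labels), batch_size=64, shuffle=True
)

model = SmallConvNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):
    for xb, yb in loader:
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```

The savings in the paper come from training on the reduced, sparse data rather than the full set; how the reduced set is fed to the network (sparse codes directly, reconstructed images, or a pruned subset) is a design choice this sketch only approximates.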

Bibliographic Details
Published in: Image and Vision Computing, 2023-07, Vol. 135, p. 104718, Article 104718
Main Authors: Haider, Usman; Hanif, Muhammad; Rashid, Ahmar; Hussain, Syed Fawad
Format: Article
Language: English
Subjects: Convolution neural networks; Deep learning; Dictionary learning; Image classification; Sparse representation
DOI: 10.1016/j.imavis.2023.104718
ISSN: 0262-8856
Publisher: Elsevier B.V.
Source: ScienceDirect Freedom Collection