
Compressing by Learning in a Low-Rank and Sparse Decomposition Form

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, p. 150823-150832
Main Authors: Guo, Kailing, Xie, Xiaona, Xu, Xiangmin, Xing, Xiaofen
Format: Article
Language: English
Description
Summary: Low-rankness and sparsity are often used separately to guide the compression of convolutional neural networks (CNNs). Since they capture the global and local structure of a matrix, respectively, we combine these two complementary properties to pursue better network compression performance. Most existing low-rank or sparse compression methods compress networks by approximating pre-trained models. However, the optimal solutions for pre-trained models may not be optimal for compressed networks with low-rank or sparse constraints. In this paper, we propose a low-rank and sparse learning framework that trains the compressed network from scratch. Our compression process consists of three stages. (a) In the structure design stage, we decompose a weight matrix into the sum of a low-rank matrix and a sparse matrix, and the low-rank matrix is further factorized into the product of two small matrices. (b) In the training stage, we add ℓ1 regularization to the loss function to force the sparse matrix to be sparse. (c) In the post-processing stage, we remove the unimportant connections of the sparse matrix according to its energy distribution. The pruning in the post-processing stage preserves most of the capacity of the network and largely maintains its performance, which can be further improved by fine-tuning with sparse masked convolution. Experiments on several common datasets demonstrate that our model is superior to other network compression methods based on low-rankness or sparsity. On CIFAR-10, our method compresses VGGNet-19 to 3.14% and PreActResNet-56 to 29.78% of their original sizes without an accuracy drop. On ImageNet, 62.43% of the parameters of ResNet-50 are reduced with a 0.55% top-5 accuracy loss.
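To make the three stages concrete, below is a minimal, illustrative PyTorch sketch (not the authors' released code) of a fully connected layer whose weight is parameterized as a low-rank product plus a sparse residual, W ≈ UV + S. The class name LowRankSparseLinear, the rank and energy parameters, and the prune_sparse helper are hypothetical names introduced here for illustration; the paper applies the same idea to convolutional layers.

```python
import torch
import torch.nn as nn

class LowRankSparseLinear(nn.Module):
    """Illustrative layer with weight W = U @ V + S (low-rank plus sparse)."""

    def __init__(self, in_features, out_features, rank):
        super().__init__()
        # Stage (a): the dense weight is replaced by two small factors and a sparse residual.
        self.U = nn.Parameter(torch.randn(out_features, rank) * 0.01)
        self.V = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.S = nn.Parameter(torch.zeros(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def weight(self):
        return self.U @ self.V + self.S

    def forward(self, x):
        return nn.functional.linear(x, self.weight(), self.bias)

    def l1_penalty(self):
        # Stage (b): add lam * l1_penalty() to the task loss to drive S toward sparsity.
        return self.S.abs().sum()

    @torch.no_grad()
    def prune_sparse(self, energy=0.9):
        # Stage (c): keep the largest-magnitude entries of S that account for
        # `energy` of its total squared magnitude; zero out the rest.
        vals, _ = torch.sort(self.S.flatten() ** 2, descending=True)
        cum = torch.cumsum(vals, dim=0)
        k = int(torch.searchsorted(cum, energy * cum[-1]).item()) + 1
        thresh = vals[min(k, vals.numel()) - 1].sqrt()
        mask = self.S.abs() >= thresh
        self.S.mul_(mask)
        return mask  # reusable as a fixed sparsity mask during fine-tuning
```

In a training loop one would add a term such as lam * layer.l1_penalty() to the task loss (stage b), call prune_sparse once training finishes (stage c), and reuse the returned mask to keep S fixed-sparse during fine-tuning.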
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2947846