
Convolution Filter Compression via Sparse Linear Combinations of Quantized Basis

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-09, Vol. PP, pp. 1-14
Main Authors: Lan, Weichao, Cheung, Yiu-Ming, Lan, Liang, Jiang, Juyong, Hu, Zhikai
Format: Article
Language: English
Description
Summary: Convolutional neural networks (CNNs) have achieved significant performance on various real-life tasks. However, the large number of parameters in convolutional layers requires huge storage and computation resources, making it challenging to deploy CNNs on memory-constrained embedded devices. In this article, we propose a novel compression method that generates the convolution filters in each layer from a set of learnable low-dimensional quantized filter bases: the convolution filters are reconstructed by stacking linear combinations of these bases. Because the basis weights take quantized values, the compact filters can be represented with fewer bits, allowing the network to be highly compressed. Furthermore, we enforce sparsity on the combination coefficients via L1-ball projection, which further reduces storage consumption and prevents overfitting. We also provide a detailed analysis of the compression performance of the proposed method. Evaluations on image classification and object detection tasks with various network structures demonstrate that the proposed method achieves higher compression ratios with comparable accuracy than existing state-of-the-art filter decomposition and network quantization methods.
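
To make the idea concrete, below is a minimal sketch in PyTorch of the two mechanisms the abstract describes: reconstructing convolution filters as linear combinations of a small set of quantized filter bases, and projecting the combination coefficients onto an L1 ball to induce sparsity. The function names, the ternary quantization of the bases, and the tensor shapes are illustrative assumptions rather than the authors' implementation; the projection step follows the standard algorithm of Duchi et al. (2008).

    import torch

    def project_to_l1_ball(v, radius=1.0):
        # Euclidean projection of a 1-D tensor onto the L1 ball of the given
        # radius (Duchi et al., 2008). Entries whose magnitude falls below the
        # computed threshold are zeroed out, which is what induces sparsity.
        if v.abs().sum() <= radius:
            return v
        u, _ = v.abs().sort(descending=True)
        cssv = u.cumsum(dim=0)
        k = torch.arange(1, v.numel() + 1, dtype=v.dtype, device=v.device)
        rho = (u * k > cssv - radius).nonzero().max()
        theta = (cssv[rho] - radius) / (rho + 1.0)
        return v.sign() * (v.abs() - theta).clamp(min=0)

    def reconstruct_filters(bases, coeffs):
        # bases:  (num_bases, in_ch, k, k) quantized filter bases
        # coeffs: (out_ch, num_bases)      sparse combination coefficients
        # returns (out_ch, in_ch, k, k)    full convolution filters
        return torch.einsum('ob,bikl->oikl', coeffs, bases)

    # Illustrative usage: 64 filters of shape (16, 3, 3) built from 8 bases.
    bases = torch.randint(-1, 2, (8, 16, 3, 3)).float()  # e.g., ternary values
    coeffs = torch.randn(64, 8)
    coeffs = torch.stack([project_to_l1_ball(c, radius=2.0) for c in coeffs])
    filters = reconstruct_filters(bases, coeffs)          # (64, 16, 3, 3)

Storing the small quantized bases plus the sparse coefficients in place of the full-precision filter tensor is what yields the compression; since convolution is linear, the filters can either be reconstructed once before inference or the combination can be applied to the outputs of convolutions with the bases.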
ISSN: 2162-237X
2162-2388
DOI: 10.1109/TNNLS.2024.3457943