
ABCD: A Compact Object Detector Based on Channel Quantization and Tensor Decomposition


Bibliographic Details
Main Authors: Zhang, Bingyi, Zhen, Peining, Yang, Junyan, Niu, Saisai, Yi, Hang, Chen, Hai-Bao
Format: Conference Proceeding
Language: English
Description
Summary: Object detection and tracking are critical computer vision tasks with broad societal applications; however, deep neural network-based methods consume substantial computational resources, which hinders their deployment in real-world scenarios. Quantization is a widely adopted technique for reducing the storage space and memory footprint of deep learning models, making them more energy-efficient and resource-friendly. Traditional network quantization methods quantize neural networks layer-wise, so the parameters in different channels share the same quantization range. In this paper, we propose a low-bit learning method for quantizing convolutional neural network object detectors. Unlike previous methods, we quantize the detector channel-wise to avoid accuracy loss in the low-bit setting. We use progressive quantization and progressive batch normalization fusion, and clip unnecessary long-tail weights and activations to reduce quantization loss. Moreover, based on the object detector and a long short-term memory (LSTM) network, we develop a high-performance tracking system. We apply tensor decomposition to compress the LSTM weights and obtain a higher compression ratio. Experiments are conducted on public datasets and on our infrared aerial dataset for object detection and tracking. The experimental results show that our approach outperforms state-of-the-art methods in terms of accuracy and compression ratio.
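
The channel-wise quantization idea summarized above can be illustrated with a short sketch. The following is a minimal NumPy example (not the authors' implementation) contrasting layer-wise quantization, where a single scale covers the whole weight tensor, with channel-wise quantization, where each output channel gets its own scale; the percentile-based clipping is only an assumed realization of the paper's long-tail cut.

```python
import numpy as np

def quantize_layerwise(w, bits=4):
    """Symmetric layer-wise quantization: one scale for the whole tensor."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(np.abs(w).max(), 1e-8) / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

def quantize_channelwise(w, bits=4, clip_percentile=99.9):
    """Symmetric channel-wise quantization: one scale per output channel.

    clip_percentile drops the extreme long-tail weights before the scale is
    computed (an assumption about how the long-tail cut might be realized).
    """
    qmax = 2 ** (bits - 1) - 1
    flat = np.abs(w).reshape(w.shape[0], -1)           # (out_channels, rest)
    per_ch_max = np.percentile(flat, clip_percentile, axis=1)
    scale = np.maximum(per_ch_max, 1e-8) / qmax
    scale = scale.reshape(-1, *([1] * (w.ndim - 1)))   # broadcast over the tensor
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

# Example: a conv weight tensor (out_channels, in_channels, kH, kW) whose
# channels differ widely in magnitude, as is common in trained detectors.
w = np.random.randn(64, 32, 3, 3) * np.linspace(0.1, 2.0, 64)[:, None, None, None]
err_layer = np.abs(w - quantize_layerwise(w)).mean()
err_chan = np.abs(w - quantize_channelwise(w)).mean()
print(f"layer-wise MAE:   {err_layer:.5f}")
print(f"channel-wise MAE: {err_chan:.5f}")
```

Because the weight distributions of different channels can differ by an order of magnitude, per-channel scales typically yield a noticeably lower quantization error than a single layer-wide scale at the same bit width, which is the motivation for the channel-wise scheme described in the abstract.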
ISSN:2166-6822
DOI:10.1109/ICCE-Berlin50680.2020.9352200