ABCP: Automatic Blockwise and Channelwise Network Pruning via Joint Search

Bibliographic Details
Published in:IEEE transactions on cognitive and developmental systems 2023-09, Vol.15 (3), p.1560-1573
Main Authors: Li, Jiaqi, Li, Haoran, Chen, Yaran, Ding, Zixiang, Li, Nannan, Ma, Mingjun, Duan, Zicheng, Zhao, Dongbin
Format: Article
Language:English
Description
Summary:An increasing number of model pruning methods have been proposed to resolve the contradiction between the computing power required by deep learning models and the limitations of resource-constrained devices. However, for simple tasks such as robotic detection, most traditional rule-based network pruning methods cannot reach a sufficient compression ratio with low accuracy loss, and they are time consuming and laborious. In this article, we propose automatic blockwise and channelwise network pruning (ABCP), which jointly searches the blockwise and channelwise pruning actions for robotic detection via deep reinforcement learning. A joint sampling algorithm is proposed to simultaneously generate the pruning choice of each residual block and the channel pruning ratio of each convolutional layer from the discrete and continuous search spaces, respectively. The best pruning action, accounting for both the accuracy and the complexity of the model, is finally obtained. Compared with traditional rule-based pruning methods, this pipeline saves human labor and achieves a higher compression ratio with lower accuracy loss. Tested on the mobile robot detection data set, the pruned YOLOv3 model saves 99.5% of floating-point operations, reduces parameters by 99.5%, and achieves a [Formula Omitted] speedup with only a 2.8% loss in mean average precision (mAP). On the sim2real detection data set for the robotic detection task, the pruned YOLOv3 model achieves 9.6% better mAP than the baseline model, showing better robustness.
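The joint sampling idea described in the summary can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the uniform sampling, and the fixed block/layer counts are all assumptions; in ABCP these choices would be produced by a learned reinforcement-learning policy rather than drawn at random.

```python
import random

def sample_pruning_action(num_blocks, num_layers, rng=random):
    """Jointly sample one pruning action (illustrative sketch).

    - For each residual block: a discrete choice (1 = keep, 0 = prune the block).
    - For each convolutional layer: a continuous channel pruning ratio in [0, 1].
    """
    block_choices = [rng.choice([0, 1]) for _ in range(num_blocks)]
    channel_ratios = [rng.uniform(0.0, 1.0) for _ in range(num_layers)]
    return block_choices, channel_ratios

# Example: counts chosen only for illustration.
blocks, ratios = sample_pruning_action(num_blocks=23, num_layers=52)
```

A searched action like this would then be scored by a reward balancing detection accuracy against model complexity (FLOPs and parameters), and the policy updated toward higher-reward actions.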
ISSN:2379-8920
2379-8939
DOI:10.1109/TCDS.2022.3230858