
Gradient estimation for ultra low precision POT and additive POT quantization

Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main Authors: Tesfai, Huruy, Saleh, Hani, Al-Qutayri, Mahmoud, Mohammad, Baker, Stouraitis, Thanos
Format: Article
Language: English
Description
Summary: Deep learning networks achieve high accuracy for many classification tasks in computer vision and natural language processing. As these models are usually over-parameterized, the computations and memory required are unsuitable for power-constrained devices. One effective technique to reduce this burden is low-bit quantization. However, the introduced quantization error causes a drop in classification accuracy and requires design rethinking. To benefit from the hardware-friendly power-of-two (POT) and additive POT quantization, we explore various gradient estimation methods and propose quantization error-aware gradient estimation that manoeuvres the weight update to stay as close to the projection steps as possible. The clipping or scaling coefficients of the quantization scheme are learned jointly with the model parameters to minimize quantization error. We also apply per-channel quantization to POT and additive POT quantized models to minimize the accuracy degradation caused by the rigid resolution property of POT quantization. We show that comparable accuracy can be achieved when using the proposed gradient estimation for POT quantization, even at ultra-low bit precision.
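To make the POT scheme described in the abstract concrete, the following is a minimal illustrative sketch of power-of-two weight quantization: each weight is clipped to a range set by a coefficient `alpha` (learned jointly with the model in the paper, but passed as a fixed argument here) and its magnitude is rounded to the nearest power of two. The function name, signature, and bit-to-exponent mapping are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def pot_quantize(w, alpha, bits=3):
    """Illustrative power-of-two (POT) quantization sketch.

    Each weight is mapped to sign(w) * alpha * 2^e, where the integer
    exponent e is the nearest power of two to |w| / alpha, clipped to
    the range representable with the given bit width. `alpha` plays the
    role of the learnable clipping/scaling coefficient; here it is a
    plain argument (assumption for this sketch).
    """
    n_levels = 2 ** (bits - 1) - 1          # smallest exponent magnitude
    sign = np.sign(w)
    w_abs = np.clip(np.abs(w) / alpha, 1e-8, 1.0)   # clip to [~0, alpha]
    exponent = np.clip(np.round(np.log2(w_abs)), -n_levels, 0)
    return sign * alpha * (2.0 ** exponent)
```

Because multiplication by a power of two reduces to a bit shift in hardware, this format is what makes POT quantization power-efficient; the additive POT variant mentioned in the abstract represents each weight as a sum of two or more such terms to soften POT's rigid resolution near zero.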
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3286299