
Pico-Programmable Neurons to Reduce Computations for Deep Neural Network Accelerators

Bibliographic Details
Published in: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2024-07, Vol. 32 (7), pp. 1216-1227
Main Authors: Nahvy, Alireza; Navabi, Zainalabedin
Format: Article
Language: English
Description
Summary: Deep neural networks (DNNs) have shown impressive success in various fields. In response to the ever-growing precision demands of DNN applications, increasingly complex computational models are being created, and the growing computational volume has become a challenge for the power and performance efficiency of DNN accelerators. This article presents a new neural architecture that prevents ineffective and redundant computations by using neurons with memory that have decision-making power. In addition, another local memory keeps a calculation history so that redundancy can be removed through computational reuse. Sparse computing is also supported, removing the computations associated not only with zero weights but also with the zero bits of each weight. Results on conventional datasets such as ImageNet show a computational reduction of more than 18× and up to 150×. The scalable architecture delivers 124 GOPS while consuming 197 mW of power.
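
The record itself contains no code, so the following is only a minimal Python sketch of the bit-level sparsity idea named in the summary (skipping zero weights and the zero bits of nonzero weights) by computing each weight-activation product as shift-and-add over the set bits of the weight. The function name sparse_mac and the operation counter are illustrative assumptions, not part of the published architecture.

# Illustrative sketch (not the authors' implementation): bit-level sparse
# multiply-accumulate. Each weight * activation product is formed as a sum of
# shifted activations, one per set bit of the weight, so zero weights and the
# zero bits of nonzero weights contribute no add operations.

def sparse_mac(activations, weights):
    """Accumulate sum(a * w) using shift-and-add over nonzero weight bits."""
    acc = 0
    adds = 0  # number of shift-add operations actually performed
    for a, w in zip(activations, weights):
        if w == 0 or a == 0:            # whole-operand sparsity: skip zero terms
            continue
        sign = -1 if w < 0 else 1
        w = abs(w)
        bit_pos = 0
        while w:
            if w & 1:                   # only set bits of the weight cost work
                acc += sign * (a << bit_pos)
                adds += 1
            w >>= 1
            bit_pos += 1
    return acc, adds

if __name__ == "__main__":
    acts = [3, 0, 7, 2]
    wts = [5, 9, 0, -2]                 # 5 = 0b101 -> 2 adds, -2 -> 1 add
    result, ops = sparse_mac(acts, wts)
    assert result == sum(a * w for a, w in zip(acts, wts))
    print(f"dot product = {result}, shift-add operations = {ops}")

In this toy example the dense dot product would need four multiplications, while the sparse version performs only three shift-adds, which is the kind of work reduction the architecture exploits in hardware.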
ISSN: 1063-8210, 1557-9999
DOI: 10.1109/TVLSI.2024.3386698