
A Survey on Hardware Accelerator Design of Deep Learning for Edge Devices

Bibliographic Details
Published in: Wireless Personal Communications, 2024-08, Vol. 137(3), pp. 1715-1760
Main Authors: Samanta, Anu; Hatai, Indranil; Mal, Ashis Kumar
Format: Article
Language: English
Summary: Machine learning (ML) plays a major role in a wide variety of artificial intelligence applications. This article provides a comprehensive survey of recent trends and advances in hardware accelerator design for machine learning on platforms such as ASICs, FPGAs, and GPUs. We examine architectures that support neural network (NN) execution in terms of computational units, network topologies, dataflow optimizations, and accelerators based on emerging technologies, and we highlight the key features of the various strategies for improving acceleration performance. Current difficulties, such as fair comparison across platforms, as well as open topics and obstacles in this field, are also examined. The study is intended to give readers a quick overview of neural network compression and acceleration, a clear evaluation of the different methods, and the confidence to get started in the right direction.
ISSN: 0929-6212; 1572-834X
DOI: 10.1007/s11277-024-11443-2