Efficient Memory Organization for DNN Hardware Accelerator Implementation on PSoC

Bibliographic Details
Published in:Electronics (Basel) 2021-01, Vol.10 (1), p.94
Main Authors: Rios-Navarro, Antonio, Gutierrez-Galan, Daniel, Dominguez-Morales, Juan Pedro, Piñero-Fuentes, Enrique, Duran-Lopez, Lourdes, Tapiador-Morales, Ricardo, Dominguez-Morales, Manuel Jesús
Format: Article
Language:English
Description
Summary: The use of deep learning solutions in different disciplines is increasing, and their algorithms are computationally expensive in most cases. For this reason, numerous hardware accelerators have appeared to compute their operations efficiently in parallel, achieving higher performance and lower latency. These algorithms need large amounts of data to feed each of their computing layers, which makes it necessary to efficiently handle the data transfers that feed and collect the information to and from the accelerators. For the implementation of these accelerators, hybrid devices are widely used, which have an embedded computer, where an operating system can be run, and a field-programmable gate array (FPGA), where the accelerator can be deployed. In this work, we present a software API that efficiently organizes the memory, preventing the reallocation of data from one memory area to another; it improves on the native Linux driver with an 85% speed-up and reduces the frame computing time by 28% in a real application.
ISSN: 2079-9292
DOI: 10.3390/electronics10010094