Empowering edge devices: FPGA‐based 16‐bit fixed‐point accelerator with SVD for CNN on 32‐bit memory‐limited systems
Published in: International Journal of Circuit Theory and Applications, 2024-09, Vol. 52 (9), pp. 4755-4782
Main Authors: ,
Format: Article
Language: English
Summary: Convolutional neural networks (CNNs) are now widely used in deep learning and computer vision applications. The convolutional layer accounts for most of the computation and should be computed quickly on a local edge device. Field-programmable gate arrays (FPGAs) have been extensively explored as promising hardware accelerators for CNNs due to their high performance, energy efficiency, and reconfigurability. This paper develops an efficient FPGA-based 16-bit fixed-point hardware accelerator unit for deep learning applications on a 32-bit memory-limited edge device (the PYNQ-Z2 board). Additionally, singular value decomposition (SVD) is applied to the fully connected layer for dimensionality reduction of the weight parameters. The accelerator unit was designed for all five layers and employs eight processing elements in convolution layers 1 and 2 for parallel computation. Array partitioning, loop unrolling, and pipelining are used to speed up the calculations, and an AXI-Lite interface handles communication between the IP and the other blocks. The design is tested with grayscale image classification on the MNIST handwritten digit dataset and color image classification on the Tumor dataset. The experimental results show that the proposed accelerator unit runs faster than software-based implementations: its inference is 89.03% faster than an Intel 3-core CPU, 86.12% faster than a Haswell 2-core CPU, and 82.45% faster than an NVIDIA Tesla K80 GPU. Furthermore, the throughput of the proposed design is 4.33 GOP/s, which is better than conventional CNN accelerator architectures.
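To make the listed HLS optimizations concrete (16-bit fixed point, eight parallel processing elements, array partitioning, loop unrolling, and pipelining), the sketch below shows what such a convolution kernel can look like in Vivado-HLS-style C++. This is a minimal illustration under assumed shapes and pragma placements, not the authors' actual design; the `ap_fixed<16,8>` format, the loop bounds, and the layer dimensions are assumptions.

```cpp
// conv_pe.cpp -- illustrative fixed-point convolution kernel in Vivado HLS
// style. IMG, K, and the ap_fixed<16,8> format are assumed values, not
// taken from the paper; PE = 8 mirrors the eight processing elements.
#include <ap_fixed.h>

typedef ap_fixed<16, 8> fix16_t;  // 16-bit fixed point: 8 integer, 8 fraction bits

#define IMG 28                // input feature-map side (assumed, MNIST-sized)
#define K    5                // convolution kernel side (assumed)
#define OUT (IMG - K + 1)     // valid-convolution output side
#define PE   8                // parallel processing elements

void conv_layer(const fix16_t in[IMG][IMG],
                const fix16_t w[PE][K][K],
                const fix16_t bias[PE],
                fix16_t out[PE][OUT][OUT]) {
    // Partition the weights across the PE dimension so all eight PEs can
    // read their kernels in the same cycle.
#pragma HLS ARRAY_PARTITION variable=w complete dim=1

ROW: for (int r = 0; r < OUT; ++r) {
COL:     for (int c = 0; c < OUT; ++c) {
             // Start one output pixel per clock once the pipeline fills.
#pragma HLS PIPELINE II=1
PE_LOOP:     for (int p = 0; p < PE; ++p) {
                 // Unrolled: eight output channels computed in parallel.
#pragma HLS UNROLL
                 fix16_t acc = bias[p];
                 for (int i = 0; i < K; ++i)
                     for (int j = 0; j < K; ++j)
                         acc += w[p][i][j] * in[r + i][c + j];
                 out[p][r][c] = acc;
             }
         }
     }
}
```

In hardware, the `PIPELINE` pragma fully unrolls the inner K × K loops, so each pipeline stage issues all multiply-accumulates for one output pixel across the eight PEs.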
This paper introduces a 16-bit fixed-point field-programmable gate array (FPGA)-based hardware accelerator for deep learning on a 32-bit low-memory edge device (the PYNQ-Z2 board). Singular value decomposition (SVD) compresses the fully connected layer. The accelerator unit spans all five layers, leveraging eight processing elements for parallel computation in convolution layers 1 and 2. Techniques such as array partitioning, loop unrolling, and pipelining enhance computation speed. The accelerator outperforms software-based implementations by 89.03%, 86.12%, and 82.45% against an Intel 3-core CPU, a Haswell 2-core CPU, and an NVIDIA Tesla K80 GPU, respectively.
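On the SVD step: factorizing the M × N fully connected weight matrix W as W ≈ U_k Σ_k V_kᵀ with rank k well below min(M, N) replaces one large matrix-vector product with two small ones and shrinks storage from M·N weights to k·(M+N). A minimal software sketch, assuming the factors were computed offline and Σ_k was folded into U_k (all names and sizes here are illustrative, not from the paper):

```cpp
// fc_svd.cpp -- rank-k fully connected layer y = W x, with W (M x N)
// pre-factorized offline as Uk * VkT and the singular values folded into
// Uk. Cost drops from M*N to k*(M+N) multiply-adds.
#include <vector>

// Dense row-major mat-vec helper (hypothetical, for illustration only).
static std::vector<float> matvec(const std::vector<float>& A,
                                 const std::vector<float>& x,
                                 int rows, int cols) {
    std::vector<float> y(rows, 0.0f);
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            y[r] += A[r * cols + c] * x[c];
    return y;
}

// Uk is M x k (singular values folded in), VkT is k x N.
std::vector<float> fc_svd(const std::vector<float>& Uk,
                          const std::vector<float>& VkT,
                          const std::vector<float>& x,
                          int M, int N, int k) {
    std::vector<float> t = matvec(VkT, x, k, N);  // t = Vk^T * x  (k values)
    return matvec(Uk, t, M, k);                   // y = Uk * t   (M values)
}
```

With k small, the two short mat-vecs also cut the weight traffic that dominates fully connected layers on a memory-limited device, which is the motivation for applying SVD here.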
ISSN: 0098-9886; 1097-007X
DOI: 10.1002/cta.3957