Performance Analysis of Bit-Width Reduced Floating-Point Arithmetic Units in FPGAs: A Case Study of Neural Network-Based Face Detector
| Published in | EURASIP Journal on Embedded Systems, July 2009, Vol. 2009 (1), p. 258921 |
|---|---|
| Main Authors | |
| Format | Article |
| Language | English |
| Summary | This paper implements a field-programmable gate array (FPGA) based face detector using a neural network (NN) and bit-width reduced floating-point arithmetic units (FPUs). An analytical error model, based on the maximum relative representation error (MRRE) and the average relative representation error (ARRE), is developed to obtain the maximum and average output errors of the bit-width reduced FPUs. The bit-width reduced FPUs and the NN are then designed using MATLAB and VHDL, and the analytical (MATLAB) results are compared with the experimental (VHDL) results. The two sets of results agree closely in shape, and it is shown that incremental reductions in the number of bits used can produce significant cost reductions in terms of area, speed, and power. |
| ISSN | 1687-3955; 1687-3963 |
| DOI | 10.1186/1687-3963-2009-258921 |
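The summary refers to the maximum and average relative representation errors (MRRE and ARRE) but does not reproduce their definitions. For reference only, a standard textbook formulation is given below, assuming a radix-$\beta$ floating-point format with an $m$-digit significand normalized to $[1/\beta, 1)$ and round-to-nearest (following conventions such as Koren's); the paper's own definitions may differ in detail.

$$
\mathrm{MRRE}(\beta, m) = \tfrac{1}{2}\,\beta^{\,1-m},
\qquad
\mathrm{ARRE}(\beta, m) = \frac{\beta - 1}{\ln \beta}\cdot\frac{\beta^{-m}}{4}.
$$

Under these definitions, for binary formats ($\beta = 2$) both measures halve with each additional significand bit, which is the basic mechanism by which the output-error bounds mentioned in the summary scale with the chosen bit width.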