A Feasible FPGA Weightless Neural Accelerator
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: AI applications have recently driven the computer architecture industry towards novel, more efficient dedicated hardware accelerators and tools. Weightless Neural Networks (WNNs) are a class of Artificial Neural Networks (ANNs) often applied to pattern-recognition problems. A WNN uses a set of Random Access Memories (RAMs) as its main mechanism for training on and classifying a given input pattern. Owing to this memory-based architecture, it can be easily mapped onto hardware and greatly accelerated by a dedicated Register Transfer-Level (RTL) architecture designed to enable multiple memory accesses in parallel. On the other hand, a straightforward WNN hardware implementation requires excessive memory resources in both ASIC and FPGA variants. This work designs and evaluates a weightless neural accelerator written in High-Level Synthesis (HLS). Our WNN accelerator implements hash tables, instead of regular RAMs, to substantially reduce its memory requirements, so that it fits in a fairly small Xilinx FPGA. Performance, circuit-area and power-consumption results show that our accelerator can efficiently learn and classify the MNIST dataset about 8 times faster than the system's embedded ARM processor.
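The summary describes the core idea of a RAM-based WNN and the paper's memory-saving trick of using hash tables instead of fully populated RAMs. The sketch below is an illustrative software analogue only, not the paper's HLS implementation: a single WiSARD-style discriminator whose RAM nodes are Python dicts (hash tables), so memory is spent only on addresses actually seen during training. All class and method names here are hypothetical.

```python
# Illustrative sketch of a weightless neural discriminator (WiSARD-style).
# Each RAM node is a dict acting as a hash table: only trained addresses
# consume memory, mirroring the paper's hash-table-instead-of-RAM idea.
import random

class Discriminator:
    def __init__(self, input_bits, tuple_size, seed=0):
        rng = random.Random(seed)
        bits = list(range(input_bits))
        rng.shuffle(bits)                          # fixed random input mapping
        self.tuples = [bits[i:i + tuple_size]
                       for i in range(0, input_bits, tuple_size)]
        self.rams = [dict() for _ in self.tuples]  # sparse "RAM" nodes

    def _addresses(self, pattern):
        # Concatenate each tuple's selected input bits into a RAM address.
        for t in self.tuples:
            yield tuple(pattern[i] for i in t)

    def train(self, pattern):
        # Training writes a 1 at each addressed location.
        for ram, addr in zip(self.rams, self._addresses(pattern)):
            ram[addr] = 1

    def score(self, pattern):
        # Response = number of RAM nodes that recognise their address.
        return sum(ram.get(addr, 0)
                   for ram, addr in zip(self.rams, self._addresses(pattern)))
```

In a full classifier one discriminator is trained per class and an input is assigned to the class whose discriminator responds most strongly; the hardware version parallelises the per-node lookups that this loop performs sequentially.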
ISSN: 2158-1525
DOI: 10.1109/ISCAS.2019.8702797