Hardware Acceleration of Multilayer Perceptron Based on Inter-Layer Optimization
Format: Conference Proceeding
Language: English
Online Access: Request full text
Summary: Multilayer Perceptron (MLP) is used in a broad range of applications. Hardware acceleration of MLP is one of the most promising ways to provide better performance and energy efficiency. Previous works focused on intra-layer optimization and layer-after-layer processing, leaving inter-layer optimization unstudied. In this paper, we propose hardware acceleration of MLPs based on inter-layer optimization, which allows us to overlap the execution of MLP layers. First, we describe the inter-layer optimization from software and mathematical perspectives. Then, a reference Two-Neuron architecture that efficiently supports the inter-layer optimization is proposed and implemented. Discussions of area cost, performance, and energy consumption are carried out to explore the scalability of the Two-Neuron architecture. Results show that the proposed MLP design optimized across layers achieves better performance and energy efficiency than conventional intra-layer optimized designs. As such, inter-layer optimization provides a direction other than intra-layer optimization to gain further performance and energy improvements for the hardware acceleration of MLPs.
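The record contains no implementation details beyond this abstract, so the Python/NumPy sketch below only illustrates the general inter-layer overlap idea in software terms: a neuron's output is consumed by the next layer's partial sums as soon as it is produced, rather than after the whole layer finishes. It is not the paper's Two-Neuron hardware architecture; the function names `mlp_overlapped` and `mlp_sequential`, the two-layer ReLU network, and all shapes are assumptions made for the example.

```python
import numpy as np

def mlp_overlapped(x, W1, b1, W2, b2):
    """Two-layer MLP where each hidden-neuron output is forwarded to the
    output layer's partial sums as soon as it is produced, instead of
    waiting for the whole hidden layer to finish (inter-layer overlap)."""
    relu = lambda v: np.maximum(v, 0.0)
    partial_out = b2.astype(float).copy()      # layer-2 partial sums start at the biases
    for j in range(W1.shape[0]):               # hidden neuron j
        h_j = relu(W1[j] @ x + b1[j])          # its output is ready here ...
        partial_out += W2[:, j] * h_j          # ... and is consumed by layer 2 immediately
    return relu(partial_out)                   # output-layer activation (ReLU assumed)

def mlp_sequential(x, W1, b1, W2, b2):
    """Conventional layer-after-layer evaluation; produces the same result."""
    relu = lambda v: np.maximum(v, 0.0)
    h = relu(W1 @ x + b1)
    return relu(W2 @ h + b2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x  = rng.standard_normal(8)
    W1 = rng.standard_normal((16, 8)); b1 = rng.standard_normal(16)
    W2 = rng.standard_normal((4, 16)); b2 = rng.standard_normal(4)
    assert np.allclose(mlp_overlapped(x, W1, b1, W2, b2),
                       mlp_sequential(x, W1, b1, W2, b2))
```

In hardware, the per-neuron loop corresponds to pipelining: layer l+1 accumulators update while layer l is still being computed, which is the overlap the abstract refers to.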
ISSN: 2576-6996
DOI: 10.1109/ICCD46524.2019.00028