Auto Batching Scheme for Optimizing LSTM Inference on FPGA Platforms
| Published in: | IEEE Access, 2024, Vol. 12, pp. 159380-159394 |
|---|---|
| Main Authors: | , |
| Format: | Article |
| Language: | English |
| Summary: | This paper presents an innovative auto batching scheme designed to optimize Long Short-Term Memory (LSTM) inference on Field-Programmable Gate Array (FPGA) platforms. Existing block batching methods face challenges with LSTM models that have large hidden sizes due to insufficient on-chip memory, which impedes prefetching and leads to repeated evictions and reloads, significantly reducing processing utilization. Our approach extends block batching with weight stationary block batching (WSBB), allowing computation to proceed without stalls regardless of prefetch availability. Additionally, bypass-enabled block batching (BEBB) ensures that, even when on-chip memory is insufficient, on-chip contents are not contaminated, while off-chip memory bandwidth is fully utilized. Experimental results from both synthetic benchmarks (the DeepBench suite) and a real-world application (RNN-T) validate the superior performance and efficiency of the proposed method. Our auto batching scheme demonstrates up to a 3.7x speedup over previous block batching while maintaining high computational efficiency, even with limited on-chip memory. Furthermore, the FPGA-based implementation of our scheme achieves a 5x speedup over the CPU and 4.3x higher energy efficiency (GFLOP/s/W) compared to the GPU. |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2024.3488033 |
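
The summary above describes two scheduling ideas: WSBB keeps a weight block resident while an entire batch streams through it, and BEBB streams oversized blocks directly from off-chip memory rather than evicting cached ones. The Python sketch below is only an assumption-laden illustration of those two ideas; `Block`, `wsbb_schedule`, `bebb_load`, `ON_CHIP_CAPACITY`, and the block sizes are all hypothetical names and values, not the paper's actual FPGA implementation.

```python
# Hypothetical sketch of the two batching ideas described in the summary.
# All names and sizes here are illustrative assumptions, not the authors' code.

from dataclasses import dataclass

ON_CHIP_CAPACITY = 4  # assumed on-chip buffer budget, in weight-block units


@dataclass
class Block:
    name: str
    size: int  # in weight-block units


def wsbb_schedule(weight_blocks, batch):
    """Weight stationary block batching (WSBB), sketched.

    Each weight block is loaded once and kept stationary while every request
    in the batch streams through it, so compute never stalls waiting for the
    same block to be re-fetched per request.
    """
    trace = []
    for blk in weight_blocks:      # outer loop: the weight block stays put
        trace.append(f"load {blk.name}")
        for req in batch:          # inner loop: reuse the resident weights
            trace.append(f"compute {blk.name} x {req}")
    return trace


def bebb_load(blk, on_chip, used):
    """Bypass-enabled block batching (BEBB), sketched.

    If a block fits in the remaining on-chip budget it is cached for reuse;
    otherwise it is streamed directly from off-chip memory, so oversized
    blocks never evict (contaminate) the blocks already resident on-chip.
    """
    if used + blk.size <= ON_CHIP_CAPACITY:
        on_chip.append(blk.name)
        return used + blk.size, f"cache {blk.name} on-chip"
    return used, f"stream {blk.name} from off-chip (bypass)"


if __name__ == "__main__":
    blocks = [Block("W_ih", 2), Block("W_hh", 3), Block("W_out", 2)]
    print(wsbb_schedule(blocks, batch=["req0", "req1"]))
    used, on_chip = 0, []
    for b in blocks:
        used, action = bebb_load(b, on_chip, used)
        print(action)
```

The point of the sketch is loop order and admission policy: WSBB puts the weight block in the outer loop so each block is fetched once per batch rather than once per request, and BEBB admits a block on-chip only when it fits the remaining budget, streaming it from off-chip otherwise so resident blocks are never evicted mid-batch.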