GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis
Format: Conference Proceeding
Language: English
Summary: We present GNNHLS, an open-source framework for comprehensively evaluating GNN inference acceleration on FPGAs via HLS. It contains a software stack for data generation and baseline deployment, along with FPGA implementations of 6 well-tuned GNN HLS kernels. Evaluated on 4 graph datasets with distinct topologies and scales, GNNHLS achieves up to 50.8× speedup and 423× energy reduction relative to the CPU baselines, and up to 5.16× speedup and 74.5× energy reduction relative to the GPU baselines.
ISSN: 2576-6996
DOI: 10.1109/ICCD58817.2023.00092