HFGCN: High-speed and Fully-optimized GCN Accelerator
A graph convolutional network (GCN) is a type of neural network that infers new nodes based on the connectivity of a graph. Like other neural networks, GCNs require a large volume of computation. In this paper, we propose a new hardware architecture for GCNs that tackles the problem of wasted cycles during processing. We propose a new scheduler module that reduces memory accesses through aggregation, and an optimized systolic array with improved delay. We compare our design with a state-of-the-art GCN accelerator and show that it outperforms the prior work.
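The split the abstract describes between an aggregation-oriented scheduler and a systolic array corresponds to the two phases of a standard GCN layer: a dense feature-by-weight multiplication (the GEMM a systolic array accelerates) and a sparse neighbor aggregation over the adjacency matrix (the memory-access-heavy part a scheduler targets). Below is a minimal software sketch of that layer computation for orientation only; the symmetric normalization, ReLU activation, and all names here are standard GCN conventions assumed for illustration, not details taken from the paper.

```python
# Illustrative only -- a plain software version of the GCN layer computation that
# accelerators of this kind target. Not code from the paper.
import numpy as np
import scipy.sparse as sp


def gcn_layer(adj: sp.csr_matrix, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One GCN propagation step: H' = ReLU(A_hat @ H @ W)."""
    # Symmetric normalization with self-loops: A_hat = D^(-1/2) (A + I) D^(-1/2).
    a_hat = adj + sp.eye(adj.shape[0], format="csr")
    d_inv_sqrt = sp.diags(np.asarray(a_hat.sum(axis=1)).ravel() ** -0.5)
    a_hat = d_inv_sqrt @ a_hat @ d_inv_sqrt

    combined = features @ weights       # combination phase: dense GEMM (systolic-array friendly)
    aggregated = a_hat @ combined       # aggregation phase: sparse, dominated by memory accesses
    return np.maximum(aggregated, 0.0)  # ReLU


# Toy usage: a 4-node ring graph, 3 input features per node, 2 output features.
adj = sp.csr_matrix(np.array([[0, 1, 0, 1],
                              [1, 0, 1, 0],
                              [0, 1, 0, 1],
                              [1, 0, 1, 0]], dtype=float))
h = np.random.rand(4, 3)
w = np.random.rand(3, 2)
print(gcn_layer(adj, h, w).shape)  # -> (4, 2)
```

In hardware terms, the dense combination maps naturally onto a systolic array, while the sparse aggregation is where a scheduler can cut redundant memory accesses, which is the division of labor the abstract describes.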
Main Authors: | Han, MinSeok; Kim, Jiwan; Kim, Donggeon; Jeong, Hyunuk; Jung, Gilho; Oh, Myeongwon; Lee, Hyundong; Go, Yunjeong; Kim, HyunWoo; Kim, Jongbeom; Song, Taigon |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Convolution; Decoding; Delays; Hardware; Neural networks; Process control; Systolic arrays |
container_start_page | 1 |
---|---|
container_end_page | 7 |
container_title | 2023 24th International Symposium on Quality Electronic Design (ISQED) |
creator | Han, MinSeok; Kim, Jiwan; Kim, Donggeon; Jeong, Hyunuk; Jung, Gilho; Oh, Myeongwon; Lee, Hyundong; Go, Yunjeong; Kim, HyunWoo; Kim, Jongbeom; Song, Taigon |
description | A graph convolutional network (GCN) is a type of neural network that infers new nodes based on the connectivity of a graph. Like other neural networks, GCNs require a large volume of computation. In this paper, we propose a new hardware architecture for GCNs that tackles the problem of wasted cycles during processing. We propose a new scheduler module that reduces memory accesses through aggregation, and an optimized systolic array with improved delay. We compare our design with a state-of-the-art GCN accelerator and show that it outperforms the prior work. |
doi_str_mv | 10.1109/ISQED57927.2023.10129340 |
format | conference_proceeding |
date | 2023-04-05 |
eisbn | 9798350334753 |
publisher | IEEE |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 1948-3295 |
ispartof | 2023 24th International Symposium on Quality Electronic Design (ISQED), 2023, p.1-7 |
issn | 1948-3295 |
language | eng |
recordid | cdi_ieee_primary_10129340 |
source | IEEE Xplore All Conference Series |
subjects | Convolution; Decoding; Delays; Hardware; Neural networks; Process control; Systolic arrays |
title | HFGCN: High-speed and Fully-optimized GCN Accelerator |