An Algorithm-Hardware Co-Optimized Framework for Accelerating N:M Sparse Transformers

The Transformer has been an indispensable staple in deep learning. However, for real-life applications, it is very challenging to deploy efficient Transformers due to the immense numbers of parameters and operations in these models. To relieve this burden, exploiting sparsity is an effective approach to accelerating Transformers. Newly emerging Ampere graphics processing units (GPUs) leverage a 2:4 sparsity pattern to achieve model acceleration, but this fixed pattern can hardly meet the diverse algorithm and hardware constraints encountered when deploying models. By contrast, we propose an algorithm-hardware co-optimized framework that flexibly and efficiently accelerates Transformers by utilizing general N:M sparsity patterns. First, from an algorithm perspective, we propose a sparsity inheritance mechanism along with inherited dynamic pruning (IDP) to rapidly obtain a series of N:M sparse candidate Transformers. A model compression scheme is further proposed to significantly reduce the storage requirement for deployment. Second, from a hardware perspective, we present a flexible and efficient hardware architecture, namely STA, to achieve significant speedup when deploying N:M sparse Transformers. STA features not only a computing engine that unifies sparse-dense and dense-dense matrix multiplications with high computational efficiency but also a scalable softmax module that eliminates the latency of intermediate off-chip data communication. Experimental results show that, compared to other methods, N:M sparse Transformers generated using IDP achieve an average improvement of 6.7% in accuracy with high training efficiency. Moreover, STA achieves 14.47× and 11.33× speedups over an Intel i9-9900X CPU and an NVIDIA RTX 2080 Ti GPU, respectively, and performs 2.00× to 19.47× faster inference than state-of-the-art field-programmable gate array (FPGA)-based accelerators for Transformers.
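The N:M pattern named in the abstract constrains every group of M consecutive weights to contain at most N nonzeros; the Ampere GPUs' 2:4 pattern is the special case N=2, M=4. As a minimal illustration of that rule only, the sketch below applies one-shot magnitude-based N:M pruning in NumPy. It is not the paper's IDP procedure (which additionally inherits sparsity across candidate models during training), and the function name `nm_prune` is our own.

```python
import numpy as np

def nm_prune(weights: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Zero all but the n largest-magnitude entries in every group of
    m consecutive weights along the last axis (general N:M sparsity)."""
    if weights.shape[-1] % m != 0:
        raise ValueError("last dimension must be divisible by m")
    groups = weights.reshape(-1, m)              # one row per group of m weights
    # Indices of the (m - n) smallest-magnitude entries in each group.
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)  # clear the pruned slots
    return (groups * mask).reshape(weights.shape)

W = np.random.randn(4, 8)
W_24 = nm_prune(W, n=2, m=4)  # every group of 4 keeps its 2 largest weights
assert (np.count_nonzero(W_24.reshape(-1, 4), axis=1) <= 2).all()
```

The abstract also credits STA's scalable softmax module with eliminating intermediate off-chip data communication. A well-known software analogue of that idea is the single-pass ("online") softmax, which rescales a running max and running normalizer as each element streams in, so the exponentials never have to be materialized and re-read as an intermediate vector. The sketch below illustrates the numerics only and makes no claim about the STA hardware design.

```python
import math

def online_softmax(x):
    """Streaming softmax: the running max m and running sum s are
    rescaled on the fly as each element arrives, so the max and the
    normalizer are obtained in a single pass over the input."""
    m, s = float("-inf"), 0.0
    for v in x:
        m_new = max(m, v)
        s = s * math.exp(m - m_new) + math.exp(v - m_new)
        m = m_new
    return [math.exp(v - m) / s for v in x]
```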

Bibliographic Details
Published in: IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2022-11, Vol. 30 (11), p. 1573-1586
Main Authors: Fang, Chao; Zhou, Aojun; Wang, Zhongfeng
Format: Article
Language: English
Publisher: New York: IEEE
DOI: 10.1109/TVLSI.2022.3197282
ISSN: 1063-8210
EISSN: 1557-9999
CODEN: ITCOB4
Subjects: Algorithms; Algorithm–hardware codesign; Computational modeling; Constraint modelling; Engines; Field programmable gate arrays; Graphics processing units; Hardware; hardware accelerator; Machine learning; model compression; Network latency; Optimization; pruning; Sparse matrices; Sparsity; Transformer; Transformers