A Charge-Domain Scalable-Weight In-Memory Computing Macro With Dual-SRAM Architecture for Precision-Scalable DNN Accelerators
This paper presents a charge-domain in-memory computing (IMC) macro for precision-scalable deep neural network accelerators. The proposed Dual-SRAM cell structure with coupling capacitors enables charge-domain multiply-and-accumulate (MAC) operation with variable-precision signed weights. Unlike prior charge-domain IMC macros that only support binary neural networks or digitally compute weighted sums for MAC operations with multi-bit weights, the proposed macro implements analog weighted sums for energy-efficient bit-scalable MAC operations with a novel series-coupled merging scheme. A test chip with a 16-kb SRAM macro is fabricated in a 28-nm FDSOI process, and the measured macro throughput is 125.2-876.5 GOPS for weight bit-precision varying from 2 to 8. The macro also achieves energy efficiency ranging from 18.4 TOPS/W for 8-b weights to 119.2 TOPS/W for 2-b weights.
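As a rough illustration of the bit-scalable signed-weight MAC the abstract describes, the sketch below models the arithmetic in Python: two's-complement weights are split into bit planes, a binary MAC is computed per plane, and the partial sums are combined with binary weighting (MSB negative). This is only a minimal functional model under those assumptions, not the macro's charge-domain circuit or its series-coupled merging scheme; the function name, the use of NumPy, and the digital combination of partial sums are illustrative choices, not details from the paper.

```python
import numpy as np

def bit_scalable_mac(activations, weights, n_bits):
    """Functional model of a MAC with n_bits two's-complement weights.

    Each weight bit plane acts like one binary weight column; its partial
    sum is scaled by +/-2^b and accumulated, mimicking (digitally) how
    per-bit partial sums can be merged with binary weighting.
    """
    activations = np.asarray(activations, dtype=np.int64)
    weights = np.asarray(weights, dtype=np.int64)

    # Re-encode signed weights as unsigned n_bits-wide two's-complement patterns.
    patterns = weights & ((1 << n_bits) - 1)

    total = 0
    for b in range(n_bits):
        bit_plane = (patterns >> b) & 1            # one binary weight column
        partial = np.dot(activations, bit_plane)   # binary MAC for this bit
        scale = -(1 << b) if b == n_bits - 1 else (1 << b)  # MSB carries negative weight
        total += scale * partial
    return int(total)

# Quick check against a direct signed dot product.
acts = np.array([3, 1, 4, 1, 5])
wts = np.array([-2, 7, -8, 0, 3])  # representable in 4-bit two's complement
assert bit_scalable_mac(acts, wts, n_bits=4) == int(np.dot(acts, wts))
```

Changing `n_bits` in this model is the software analogue of the macro's precision scaling: fewer bit planes mean fewer partial sums to compute and merge, which is where the reported throughput and energy-efficiency gains at lower weight precision come from.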
Published in: IEEE transactions on circuits and systems. I, Regular papers, 2021-08, Vol.68 (8), p.3305-3316
Main Authors: Lee, Eunyoung; Han, Taeyoung; Seo, Donguk; Shin, Gicheol; Kim, Jaerok; Kim, Seonho; Jeong, Soyoun; Rhe, Johnny; Park, Jaehyun; Ko, Jong Hwan; Lee, Yoonmyung
Format: Article
Language: English
Subjects: Accelerators; Artificial neural networks; bit-scalable; Capacitors; charge-domain compute; Computation; Computer architecture; Couplings; deep neural networks; Domains; Energy efficiency; In-memory computing; machine learning; Merging; Microprocessors; Neural networks; SRAM cells; Static random access memory; Sums; Transistors; Weight
container_end_page | 3316 |
container_issue | 8 |
container_start_page | 3305 |
container_title | IEEE transactions on circuits and systems. I, Regular papers |
container_volume | 68 |
creator | Lee, Eunyoung; Han, Taeyoung; Seo, Donguk; Shin, Gicheol; Kim, Jaerok; Kim, Seonho; Jeong, Soyoun; Rhe, Johnny; Park, Jaehyun; Ko, Jong Hwan; Lee, Yoonmyung
description | This paper presents a charge-domain in-memory computing (IMC) macro for precision-scalable deep neural network accelerators. The proposed Dual-SRAM cell structure with coupling capacitors enables charge-domain multiply and accumulate (MAC) operation with variable-precision signed weights. Unlike prior charge-domain IMC macros that only support binary neural networks or digitally compute weighted sums for MAC operation with multi-bit weights, the proposed macro implements analog weighted sums for energy-efficient bit-scalable MAC operations with a novel series-coupled merging scheme. A test chip with a 16-kb SRAM macro is fabricated in 28-nm FDSOI process, and the measured macro throughput is 125.2-876.5 GOPS for weight bit-precision varying from 2 to 8. The macro also achieves energy efficiency ranging from 18.4 TOPS/W for 8-b weight to 119.2 TOPS/W for 2-b weight. |
doi_str_mv | 10.1109/TCSI.2021.3080042 |
format | article |
publisher | New York: IEEE
rights | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
coden | ITCSCH
eissn | 1558-0806
pages | 12
ieee_id | 9437302
online_access | https://ieeexplore.ieee.org/document/9437302
orcidid | 0000-0003-4434-4318; 0000-0001-9468-1692; 0000-0001-7105-2199; 0000-0002-4086-5215; 0000-0003-3453-3940; 0000-0002-6092-2168
fulltext | fulltext |
identifier | ISSN: 1549-8328 |
ispartof | IEEE transactions on circuits and systems. I, Regular papers, 2021-08, Vol.68 (8), p.3305-3316 |
issn | 1549-8328; 1558-0806
language | eng |
recordid | cdi_ieee_primary_9437302 |
source | IEEE Electronic Library (IEL) Journals |
subjects | Accelerators; Artificial neural networks; bit-scalable; Capacitors; charge-domain compute; Computation; Computer architecture; Couplings; deep neural networks; Domains; Energy efficiency; In-memory computing; machine learning; Merging; Microprocessors; Neural networks; SRAM cells; Static random access memory; Sums; Transistors; Weight
title | A Charge-Domain Scalable-Weight In-Memory Computing Macro With Dual-SRAM Architecture for Precision-Scalable DNN Accelerators |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-21T06%3A23%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_ieee_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Charge-Domain%20Scalable-Weight%20In-Memory%20Computing%20Macro%20With%20Dual-SRAM%20Architecture%20for%20Precision-Scalable%20DNN%20Accelerators&rft.jtitle=IEEE%20transactions%20on%20circuits%20and%20systems.%20I,%20Regular%20papers&rft.au=Lee,%20Eunyoung&rft.date=2021-08-01&rft.volume=68&rft.issue=8&rft.spage=3305&rft.epage=3316&rft.pages=3305-3316&rft.issn=1549-8328&rft.eissn=1558-0806&rft.coden=ITCSCH&rft_id=info:doi/10.1109/TCSI.2021.3080042&rft_dat=%3Cproquest_ieee_%3E2551364972%3C/proquest_ieee_%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c293t-80c1db11323264f5d2a0e047f602ab054856c3d1e287e3fa35845e1ecd0c40783%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2551364972&rft_id=info:pmid/&rft_ieee_id=9437302&rfr_iscdi=true |