
A 4096-Neuron 1M-Synapse 3.8-pJ/SOP Spiking Neural Network With On-Chip STDP Learning and Sparse Weights in 10-nm FinFET CMOS

A reconfigurable 4096-neuron, 1M-synapse chip in 10-nm FinFET CMOS is developed to accelerate inference and learning for many classes of spiking neural networks (SNNs). The SNN features digital circuits for leaky integrate-and-fire (LIF) neuron models, on-chip spike-timing-dependent plasticity (STDP) learning, and high-fan-out multicast spike communication. Structured fine-grained weight sparsity reduces synapse memory by up to 16× with less than 2% overhead for storing connections. Approximate computing co-optimizes the dropping flow control and benefits from algorithmic noise to process spatiotemporal spike patterns with up to 9.4× lower energy. The SNN achieves a peak throughput of 25.2 GSOP/s at 0.9 V, peak energy efficiency of 3.8 pJ/SOP at 525 mV, and 2.3-µW/neuron operation at 450 mV. On-chip unsupervised STDP trains a spiking restricted Boltzmann machine to de-noise Modified National Institute of Standards and Technology (MNIST) digits and to reconstruct natural scene images with an RMSE of 0.036. Near-threshold operation, in conjunction with temporal and spatial sparsity, reduces energy by 17.4× to 1.0-µJ/classification in a 236×20 feed-forward network that is trained to classify MNIST digits using supervised STDP. A binary-activation multilayer perceptron with 50% sparse weights is trained offline with error backpropagation to classify MNIST digits with 97.9% accuracy at 1.7-µJ/classification.
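The abstract's core mechanism (LIF neuron dynamics with trace-based STDP weight updates) can be sketched in a few lines. This is an illustrative software model only, not the chip's implementation: all parameter values, the leak/reset behavior, and the pair-based trace update rule are assumptions for demonstration.

```python
# Minimal sketch of LIF neurons with pair-based STDP (illustrative only;
# parameters and update rules are assumptions, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 16, 4
w = rng.uniform(0.0, 0.5, size=(n_pre, n_post))   # synaptic weights
v = np.zeros(n_post)                              # membrane potentials
v_thresh, v_leak = 1.0, 0.9                       # firing threshold, leak factor
a_plus, a_minus = 0.01, 0.012                     # STDP learning rates
trace_pre = np.zeros(n_pre)                       # pre-synaptic spike traces
trace_post = np.zeros(n_post)                     # post-synaptic spike traces
tau = 0.8                                         # trace decay per time step

for t in range(100):
    pre_spikes = (rng.random(n_pre) < 0.1).astype(float)  # random input spikes

    # LIF dynamics: leak, integrate weighted input spikes, fire, reset
    v = v_leak * v + pre_spikes @ w
    post_spikes = (v >= v_thresh).astype(float)
    v[post_spikes > 0] = 0.0

    # Exponentially decaying traces approximate spike-timing differences
    trace_pre = tau * trace_pre + pre_spikes
    trace_post = tau * trace_post + post_spikes

    # STDP: potentiate when post fires after pre (pre trace is high),
    # depress when pre fires after post (post trace is high)
    w += a_plus * np.outer(trace_pre, post_spikes)
    w -= a_minus * np.outer(pre_spikes, trace_post)
    np.clip(w, 0.0, 1.0, out=w)
```

In hardware, the paper's digital circuits would evaluate such updates per synaptic operation (SOP); the trace-based formulation shown here is a common approximation of STDP that avoids storing exact spike times.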

Bibliographic Details
Published in: IEEE Journal of Solid-State Circuits, 2019-04, Vol. 54 (4), pp. 992-1002
Main Authors: Chen, Gregory K., Kumar, Raghavan, Sumbul, H. Ekin, Knag, Phil C., Krishnamurthy, Ram K.
Format: Article
Language: English
DOI: 10.1109/JSSC.2018.2884901
ISSN: 0018-9200
EISSN: 1558-173X
Online Access: IEEE Xplore
Subjects:
Back propagation
Biological neural networks
Classification
CMOS
Digital electronics
Digits
Flow control
History
Image reconstruction
Learning
Multicasting
Multilayer perceptrons
Near-threshold voltage circuits
Neural networks
neuromorphic computing
Sparsity
spike-timing-dependent plasticity (STDP)
Spiking
spiking neural networks (SNNs)
Synapses
System-on-chip
Training
Weight reduction
weight sparsity