
Stochastic SOT Device Based SNN Architecture for On-Chip Unsupervised STDP Learning

Emerging device based spiking neural network (SNN) hardware design has been actively studied. In particular, energy- and area-efficient synapse crossbars have been of special interest, but the processing units that perform weight summation in the crossbar remain a main bottleneck for energy- and area-efficient hardware design. This paper proposes an efficient SNN architecture with stochastic spin-orbit torque (SOT) device based multi-bit synapses. First, an SOT device based synapse array using a modified gray code is presented; the modified gray code based synapse needs only N devices to represent 2^N levels of synapse weight. An accumulative spike technique is also adopted in the proposed synapse array to improve ADC utilization and reduce the number of neuron updates. In addition, hardware-friendly algorithmic techniques improve classification accuracy as well as energy efficiency: non-spike depression based stochastic spike-timing-dependent plasticity reduces overlapping input representations and classification error, and early read termination reduces energy consumption by turning off less associated neurons. The proposed SNN processor has been implemented in a 65 nm CMOS process and shows 90% classification accuracy on the MNIST dataset, consuming 0.78 μJ/image (training) and 0.23 μJ/image (inference) of energy within an area of 1.12 mm².
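The paper's multi-bit synapse encodes 2^N weight levels in only N two-state SOT devices via a modified gray code. As a minimal sketch (assuming the standard reflected gray code — the abstract does not spell out the modification), the useful property is that a ±1 weight step flips exactly one device:

```python
# Sketch of the gray-code idea behind the multi-bit synapse: N two-state
# devices represent 2**N weight levels, and with a gray code a +/-1 weight
# step flips exactly one device. The paper uses a *modified* gray code whose
# details are not given in the abstract; this is the standard reflected
# gray code as an illustrative baseline.

def level_to_gray(level: int) -> int:
    # Reflected (standard) gray code of an integer weight level.
    return level ^ (level >> 1)

def gray_to_level(code: int) -> int:
    # Inverse mapping: device bit-pattern back to the weight level.
    level = 0
    while code:
        level ^= code
        code >>= 1
    return level

N = 3                                    # three devices per synapse
codes = [level_to_gray(k) for k in range(2 ** N)]  # eight device patterns
```

With N = 3 devices this yields eight distinct weight levels, and stepping between adjacent levels switches exactly one device, a property that plausibly matters for write energy in a device-based synapse array.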


Bibliographic Details
Published in: IEEE Transactions on Computers 2022-09, Vol.71 (9), p.2022-2035
Main Authors: Jang, Yunho, Kang, Gyuseong, Kim, Taehwan, Seo, Yeongkyo, Lee, Kyung-Jin, Park, Byong-Guk, Park, Jongsun
Format: Article
Language:English
description Emerging device based spiking neural network (SNN) hardware design has been actively studied. Especially, energy and area efficient synapse crossbar has been of particular interest, but processing units for weight summations in synapse crossbar are still a main bottleneck for energy and area efficient hardware design. In this paper, we propose an efficient SNN architecture with stochastic spin-orbit torque (SOT) device based multi-bit synapses. First, we present SOT device based synapse array using modified gray code. The modified gray code based synapse needs only N devices to represent 2^N levels of synapse weights. Accumulative spike technique is also adopted in the proposed synapse array, to improve ADC utilization and reduce the number of neuron updates. In addition, we propose hardware friendly algorithmic techniques to improve classification accuracies as well as energy efficiencies. Non-spike depression based stochastic spike-timing-dependent plasticity is used to reduce the overlapping input representation and classification error. Early read termination is also employed to reduce energy consumption by turning off less associated neurons. The proposed SNN processor has been implemented using 65 nm CMOS process, and it shows 90% classification accuracy in MNIST dataset consuming 0.78 μJ/image (training) and 0.23 μJ/image (inference) of energy with an area of 1.12 mm².
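The non-spike depression rule described above can be sketched as a toy stochastic update: when a post-neuron fires, synapses whose input did spike are potentiated with some probability, while synapses whose input did not spike are depressed, thinning overlapping input representations. The probabilities and weight range below are illustrative assumptions, not values from the paper:

```python
import random

# Toy sketch of stochastic STDP with "non-spike depression": on a
# post-neuron firing event, a synapse with a coincident input spike is
# potentiated with probability P_POT; a synapse whose input did NOT spike
# is depressed with probability P_DEP. Probabilities and the weight range
# are illustrative assumptions only.

P_POT, P_DEP = 0.1, 0.05
W_MAX = 7                      # e.g. 2**3 - 1 with three devices per synapse

def stdp_update(weight, pre_spiked, rng=random.random):
    """Return the new weight level after one post-spike event."""
    if pre_spiked:                         # potentiate on coincident input
        if rng() < P_POT:
            weight = min(weight + 1, W_MAX)
    else:                                  # non-spike depression
        if rng() < P_DEP:
            weight = max(weight - 1, 0)
    return weight
```

Because each update moves the weight by at most one level with bounded probability, repeated presentations gradually concentrate weight on inputs that reliably co-fire with the neuron, which is the intuition behind reducing overlapping input representations.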
doi 10.1109/TC.2021.3119180
identifier ISSN: 0018-9340
issn 0018-9340
1557-9956
language eng
source IEEE Xplore (Online service)
subjects Arrays
Computer architecture
Energy consumption
Hardware
Image classification
Magnetic tunneling
Magnetization
Microprocessors
Neural networks
Neurons
on-chip learning
spiking neural network
Spin-orbit torque device
stochastic spike-timing-dependent plasticity
Switches
Synapses