Neural Network Training Acceleration With RRAM-Based Hybrid Synapses
Published in: Frontiers in neuroscience, 2021-06-24, Vol. 15, p. 690418
Main Authors: Choi, Wooseok; Kwak, Myonghoon; Kim, Seyoung; Hwang, Hyunsang
Format: Article
Language: English
Publisher: Lausanne: Frontiers Research Foundation
ISSN: 1662-453X, 1662-4548
DOI: 10.3389/fnins.2021.690418
PMID: 34248492
Subjects: Accuracy; Artificial intelligence; Conductance; crossbar array; hardware neural networks; hybrid synapse; Learning; Neural networks; Neurons; Neuroscience; Number systems; online training; resistive memory
Abstract: Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computation. Implementing an energy-efficient, high-accuracy HNN requires high-precision synaptic devices and fully parallel array operations. However, existing resistive memory (RRAM) devices can represent only a finite number of conductance states. Recent work has attempted to compensate for device nonidealities by using multiple devices per weight. While this helps, the existing parallel update scheme is difficult to apply to such multi-device synaptic units, which significantly raises the cost of the update process in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a "big" synapse and a "small" synapse, together with a matching training method. Unlike previous attempts, our architecture enables array-wise, fully parallel learning with only simple array-selection logic. To verify the hybrid synapse experimentally, we exploit Mo/TiOₓ RRAM, which shows promising synaptic properties and an areal dependency of conductance precision. By realizing the intrinsic gain through proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifying the operational scheme. Neural network simulations confirm that the RRAM-based hybrid synapse, trained with the proposed method, achieves a maximum accuracy of 97%, comparable to a floating-point software implementation (97.92%), even with only 50 conductance states per device. Our results show that efficient training and accurate inference are attainable with existing RRAM devices.
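To make the two-device idea concrete, below is a minimal Python sketch of such a hybrid unit, assuming the effective weight is GAIN · g_big + g_small, with each conductance quantized to 50 states as the abstract describes. The gain value, the update-then-transfer policy, and all names (`HybridSynapseArray`, `quantize`) are illustrative assumptions, not the article's implementation.

```python
import numpy as np

# Illustrative constants. N_STATES follows the abstract (50 conductance
# states per device); GAIN and G_MAX are hypothetical values chosen for
# the sketch, not taken from the article.
N_STATES = 50
GAIN = 10.0   # intrinsic gain of the "big" synapse (assumed value)
G_MAX = 1.0   # normalized maximum conductance


def quantize(g):
    """Snap conductances to one of N_STATES discrete levels in [0, G_MAX]."""
    step = G_MAX / (N_STATES - 1)
    return np.clip(np.round(g / step) * step, 0.0, G_MAX)


class HybridSynapseArray:
    """Toy model of the hybrid unit: effective weight = GAIN * g_big + g_small.

    The training policy below (fully parallel gradient steps on the small
    synapse, with an occasional coarse transfer into the big synapse) is an
    assumed reading of how a two-device unit could be trained, not the
    article's exact algorithm.
    """

    def __init__(self, shape, seed=0):
        rng = np.random.default_rng(seed)
        self.g_big = quantize(rng.uniform(0.0, G_MAX, shape))
        self.g_small = quantize(rng.uniform(0.0, G_MAX, shape))

    @property
    def weight(self):
        # The big synapse contributes with intrinsic gain, realized in
        # hardware by a proportionally larger device area (per the abstract).
        return GAIN * self.g_big + self.g_small

    def update_small(self, delta):
        # Array-wise, fully parallel update lands on the small synapse only.
        self.g_small = quantize(self.g_small + delta)

    def transfer(self):
        # Fold the small synapse into the big one at coarse granularity,
        # then reset it; the effective weight is preserved up to the big
        # device's quantization error.
        self.g_big = quantize(self.g_big + self.g_small / GAIN)
        self.g_small = np.zeros_like(self.g_small)


# Usage: one fine-grained update step, then a transfer.
arr = HybridSynapseArray((4, 4))
arr.update_small(0.02 * np.ones((4, 4)))      # parallel "gradient" step
w_before = arr.weight.copy()
arr.transfer()
print(np.max(np.abs(arr.weight - w_before)))  # at most half a "big" level
```

The point the sketch illustrates is the division of labor: frequent fine-grained updates touch only the small array, while the big array changes rarely and coarsely, so simple array selection suffices to keep updates fully parallel while the pair composes an effectively higher weight precision.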