
Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS wafer-scale system

Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate. We first convert a deep neural network trained in software to a spiking network on the BrainScaleS wafer-scale neuromorphic system, thereby enabling an acceleration factor of 10000 compared to the biological time domain. This mapping is followed by the in-the-loop training, where in each training step, the network activity is first recorded in hardware and then used to compute the parameter updates in software via backpropagation. An essential finding is that the parameter updates do not have to be precise, but only need to approximately follow the correct gradient, which simplifies the computation of updates. Using this approach, after only several tens of iterations, the spiking network shows an accuracy close to the ideal software-emulated prototype. The presented techniques show that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate.
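
The in-the-loop scheme described in the abstract lends itself to a short illustration. Below is a self-contained toy sketch in plain NumPy, not the BrainScaleS software stack: a stand-in "hardware" forward pass applies fixed per-neuron gain and offset mismatch plus trial-to-trial noise, and the software side backpropagates through the recorded, distorted activations. Every name and parameter in it is hypothetical. The gradient computed this way is only approximate, yet, as the abstract notes, it typically follows the true gradient closely enough for the toy network to learn the task.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2-layer network: 4 inputs -> 16 hidden -> 2 outputs.
    W1 = rng.normal(0.0, 0.5, (4, 16))
    W2 = rng.normal(0.0, 0.5, (16, 2))

    # Fixed "analog substrate" anomalies: per-neuron gain and offset mismatch.
    gain = rng.normal(1.0, 0.2, 16)
    offset = rng.normal(0.0, 0.1, 16)

    def hardware_forward(x):
        """Stand-in for emulating the network on hardware and recording
        its activity: the hidden layer is distorted by fixed mismatch
        plus trial-to-trial noise before being read back."""
        h = np.tanh(x @ W1) * gain + offset
        h = h + rng.normal(0.0, 0.05, h.shape)   # trial-to-trial variability
        return h, h @ W2

    # Toy task: which half of the input vector has the larger sum?
    X = rng.normal(0.0, 1.0, (256, 4))
    labels = (X[:, :2].sum(axis=1) > X[:, 2:].sum(axis=1)).astype(int)
    T = np.eye(2)[labels]                        # one-hot targets

    lr = 0.05
    for step in range(50):                       # "several tens of iterations"
        h, y = hardware_forward(X)               # record activity "in hardware"
        p = np.exp(y - y.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)        # softmax over the two outputs
        # Software-side backprop on the *recorded* activations. The tanh
        # derivative (1 - h**2) is wrong wherever the distortions pushed h
        # away from a true tanh output, so the gradient is only approximate.
        dy = (p - T) / len(X)
        dW2, dh = h.T @ dy, dy @ W2.T
        dW1 = X.T @ (dh * (1.0 - h**2))
        W1 -= lr * dW1                           # imprecise updates suffice
        W2 -= lr * dW2
        if step % 10 == 0:
            acc = (p.argmax(axis=1) == labels).mean()
            print(f"step {step:2d}  accuracy {acc:.2f}")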

Bibliographic Details
Main Authors: Schmitt, Sebastian, Klähn, Johann, Bellec, Guillaume, Grübl, Andreas, Güttler, Maurice, Hartel, Andreas, Hartmann, Stephan, Husmann, Dan, Husmann, Kai, Jeltsch, Sebastian, Karasenko, Vitali, Kleider, Mitja, Koke, Christoph, Kononov, Alexander, Mauch, Christian, Müller, Eric, Müller, Paul, Partzsch, Johannes, Petrovici, Mihai A., Schiefer, Stefan, Scholze, Stefan, Thanasoulis, Vasilis, Vogginger, Bernhard, Legenstein, Robert, Maass, Wolfgang, Mayr, Christian, Schüffny, René, Schemmel, Johannes, Meier, Karlheinz
Format: Conference Proceeding
Language: English
Subjects: Calibration; Hardware; Neural networks; Neuromorphics; Neurons; Training
Published in: 2017 International Joint Conference on Neural Networks (IJCNN), 2017, p.2227-2234
DOI: 10.1109/IJCNN.2017.7966125
EISSN: 2161-4407
EISBN: 9781509061822
Publisher: IEEE
Source: IEEE Xplore All Conference Series
Citations: Items that cite this one
Online Access: Request full text