
A solution to the learning dilemma for recurrent networks of spiking neurons

Recurrently connected networks of spiking neurons underlie the astounding information processing capabilities of the brain. Yet in spite of extensive research, how they can learn through synaptic plasticity to carry out complex network computations remains unclear. We argue that two pieces of this puzzle were provided by experimental data from neuroscience. A mathematical result tells us how these pieces need to be combined to enable biologically plausible online network learning through gradient descent, in particular deep reinforcement learning. This learning method, called e-prop, approaches the performance of backpropagation through time (BPTT), the best-known method for training recurrent neural networks in machine learning. In addition, it suggests a method for powerful on-chip learning in energy-efficient spike-based hardware for artificial intelligence.

Bellec et al. present a mathematically founded approximation for gradient descent training of recurrent neural networks without propagating errors backwards in time. This enables biologically plausible training of spike-based neural network models with working memory and supports on-chip training of neuromorphic hardware.
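The key idea, as described in the published article (the equations below are reconstructed from that article, not from this record), is that the BPTT gradient for a recurrent weight W_ji can be factored into a learning signal L_j^t that is available online and an eligibility trace e_ji^t that is computed forward in time:

    \frac{dE}{dW_{ji}} \;\approx\; \sum_t L_j^t \, e_{ji}^t,
    \qquad
    e_{ji}^t = \psi_j^t \, \varepsilon_{ji}^t,
    \qquad
    \varepsilon_{ji}^t = \alpha \, \varepsilon_{ji}^{t-1} + z_i^{t-1}

where z_i^t is the spike output of presynaptic neuron i, \alpha is the membrane leak factor of a leaky integrate-and-fire neuron, \psi_j^t is a pseudo-derivative of the postsynaptic spike nonlinearity, and L_j^t broadcasts the output error to neuron j. Because every factor is available at time t, no backward pass through time is required.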

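To make the data flow concrete, here is a minimal, self-contained Python/NumPy sketch of an e-prop-style update for a recurrent network of leaky integrate-and-fire neurons. It is an illustration built on the factorization above, not the authors' reference implementation; all network sizes, constants, and the random broadcast matrix B are assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and constants (assumptions, not from the paper).
n_in, n_rec, n_out = 20, 50, 3
T = 100            # time steps per trial
alpha = 0.9        # membrane leak factor per step
v_th = 1.0         # spike threshold
gamma = 0.3        # pseudo-derivative scale
eta = 1e-3         # learning rate

W_in = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_rec, n_in))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(n_rec), (n_rec, n_rec))
W_out = rng.normal(0.0, 1.0 / np.sqrt(n_rec), (n_out, n_rec))
B = rng.normal(0.0, 1.0, (n_rec, n_out))   # fixed random feedback weights

x = (rng.random((T, n_in)) < 0.05).astype(float)  # toy input spike trains
y_target = rng.random((T, n_out))                 # toy target readout

v = np.zeros(n_rec)               # membrane potentials
z = np.zeros(n_rec)               # spikes from the previous step
eps = np.zeros((n_rec, n_rec))    # eligibility vectors for W_rec
dW_rec = np.zeros_like(W_rec)     # accumulated gradient estimate

for t in range(T):
    # LIF dynamics with a soft reset after each spike.
    v = alpha * v + W_in @ x[t] + W_rec @ z - v_th * z
    z_new = (v > v_th).astype(float)

    # Pseudo-derivative: a smooth surrogate for the spike's derivative.
    psi = gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))

    # Eligibility vectors evolve forward in time (leak + presynaptic spike);
    # multiplying by psi gives the eligibility traces e_{ji}^t.
    eps = alpha * eps + z[np.newaxis, :]
    e = psi[:, np.newaxis] * eps

    # Online learning signal: the readout error, broadcast through B.
    y = W_out @ z_new
    L = B @ (y - y_target[t])

    dW_rec += L[:, np.newaxis] * e
    z = z_new

W_rec -= eta * dW_rec   # one e-prop-style gradient step on the recurrent weights

The same pattern extends to input and output weights; the essential point is that eps, e, and L depend only on quantities available at the current time step, which is what makes the rule suitable for online and on-chip learning.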
Bibliographic Details
Published in: Nature Communications, 2020-07-17, Vol. 11 (1), Article 3625, p. 3625
Main Authors: Bellec, Guillaume; Scherr, Franz; Subramoney, Anand; Hajek, Elias; Salaj, Darjan; Legenstein, Robert; Maass, Wolfgang
Format: Article
Language: English
Online Access: https://doi.org/10.1038/s41467-020-17236-y
DOI: 10.1038/s41467-020-17236-y
PMID: 32681001
Publisher: Nature Publishing Group UK (London)
ISSN: 2041-1723
EISSN: 2041-1723
Source: PubMed (Medline); Publicly Available Content Database; Nature; Springer Nature - nature.com Journals - Fully Open Access
subjects 631/114/116/1925
631/378
631/378/116/2396
631/378/2591
639/166/987
Artificial intelligence
Back propagation
Back propagation networks
Data processing
Energy efficiency
Firing pattern
Hardware
Humanities and Social Sciences
Information processing
Learning algorithms
Machine learning
Mathematical models
multidisciplinary
Nervous system
Neural networks
Neurons
Recurrent neural networks
Science
Science (multidisciplinary)
Short term memory
Spikes
Spiking
Synaptic plasticity
Training