
Rethinking Pretraining as a Bridge From ANNs to SNNs

Spiking neural networks (SNNs) are a representative class of brain-inspired models, distinguished by rich neuronal dynamics, diverse coding schemes, and low power consumption. Obtaining a high-accuracy model has long been the main challenge in the SNN field. Currently, there are two mainstream approaches: converting a well-trained artificial neural network (ANN) into its SNN counterpart, or training an SNN directly. However, converted SNNs require long inference times, while direct SNN training is generally costly and inefficient. In this work, a new SNN training paradigm is proposed that combines the two approaches through a pretraining technique and a backpropagation (BP)-based deep SNN training mechanism, yielding a more efficient pipeline for training SNNs. The pipeline includes pipe-S for static data transfer tasks and pipe-D for dynamic data transfer tasks. State-of-the-art (SOTA) results are obtained on the large-scale event-driven dataset ES-ImageNet. For training acceleration, the method reaches the same (or higher) best accuracy as comparable leaky-integrate-and-fire (LIF) SNNs using 1/8 of the training time on ImageNet-1K and 1/2 of the training time on ES-ImageNet, and a time-accuracy benchmark is also provided for the new dataset ES-UCF101. These experimental results reveal the functional similarity of parameters between ANNs and SNNs and demonstrate various potential applications of this SNN training pipeline.
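As a reading aid, here is a minimal PyTorch sketch of the training idea the abstract describes: an SNN layer with leaky-integrate-and-fire (LIF) dynamics is initialized from pretrained ANN weights, then fine-tuned end-to-end with backpropagation through a surrogate spike gradient. This is not the authors' released code; the names (SurrogateSpike, LIFLinear), layer sizes, time-step count, and surrogate width are illustrative assumptions only.

    import torch
    import torch.nn as nn


    class SurrogateSpike(torch.autograd.Function):
        """Heaviside spike in the forward pass; rectangular surrogate in the backward pass."""

        @staticmethod
        def forward(ctx, v_minus_thresh):
            ctx.save_for_backward(v_minus_thresh)
            return (v_minus_thresh > 0).float()

        @staticmethod
        def backward(ctx, grad_out):
            (v,) = ctx.saved_tensors
            # Pass gradients only near the threshold (width 0.5 is an assumed choice).
            return grad_out * (v.abs() < 0.5).float()


    class LIFLinear(nn.Module):
        """Linear layer followed by LIF dynamics unrolled over T time steps."""

        def __init__(self, in_f, out_f, tau=2.0, v_th=1.0):
            super().__init__()
            self.fc = nn.Linear(in_f, out_f)
            self.tau, self.v_th = tau, v_th

        def forward(self, x):  # x: (T, batch, in_f)
            v = torch.zeros(x.shape[1], self.fc.out_features, device=x.device)
            spikes = []
            for t in range(x.shape[0]):
                # LIF update: v <- v + (I - v) / tau; spike and hard reset at threshold.
                i_t = self.fc(x[t])
                v = v + (i_t - v) / self.tau
                s = SurrogateSpike.apply(v - self.v_th)
                v = v * (1.0 - s)  # reset fired neurons
                spikes.append(s)
            return torch.stack(spikes)  # (T, batch, out_f)


    # Pretraining bridge: copy weights from a (stand-in) pretrained ANN layer into the
    # SNN, then fine-tune with BP through time. Layer shapes are assumed to match 1:1.
    ann = nn.Linear(784, 10)          # stand-in for a pretrained ANN layer
    snn = LIFLinear(784, 10)
    snn.fc.load_state_dict(ann.state_dict())

    x = torch.rand(4, 2, 784)         # T=4 time steps, batch of 2, rate-coded inputs
    out = snn(x)                      # spike trains; time-averaged rates mimic ANN outputs
    loss = out.mean(0).sum()          # toy loss on time-averaged firing rate
    loss.backward()                   # surrogate gradient enables end-to-end BP

In the abstract's framing, pure ANN-to-SNN conversion would stop after the weight copy and rely on long rate-coded inference; the surrogate-gradient fine-tuning step is what adapts the pretrained weights to the spiking dynamics, which is the bridge the paper's pipeline builds on.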

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-07, Vol. 35 (7), p. 9054-9067
Main Authors: Lin, Yihan; Hu, Yifan; Ma, Shijie; Yu, Dongjie; Li, Guoqi
Format: Article
Language:English
Subjects: Accuracy; Artificial neural networks; Data transfer (computers); Datasets; Event-driven dataset; Feature extraction; Firing pattern; Neural coding; neural network (NN) analysis; Neural networks; Neurons; Pipelines; Pipes; Power consumption; pretraining technique; spiking NN (SNN); Task analysis; Training; Transfer learning
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2022.3217796
PMID: 36374892
Online Access: Get full text (IEEE Xplore)