
Rethinking Pretraining as a Bridge From ANNs to SNNs

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2024-07, Vol. 35 (7), p. 9054-9067
Main Authors: Lin, Yihan; Hu, Yifan; Ma, Shijie; Yu, Dongjie; Li, Guoqi
Format: Article
Language: English
Description
Summary: Spiking neural networks (SNNs) are a representative class of brain-inspired models, distinguished by rich neuronal dynamics, diverse coding schemes, and low power consumption. Obtaining high-accuracy models has long been the central challenge in the SNN field. Two mainstream methods currently exist: converting a well-trained artificial neural network (ANN) into an SNN counterpart, or training an SNN directly. However, converted SNNs suffer from long inference times, while direct SNN training is generally costly and inefficient. This work proposes a new SNN training paradigm that combines the two methods, using the pretraining technique together with a backpropagation (BP)-based deep SNN training mechanism. We believe the proposed paradigm offers a more efficient pipeline for training SNNs. The pipeline comprises pipe-S for static-data transfer tasks and pipe-D for dynamic-data transfer tasks. State-of-the-art (SOTA) results are obtained on the large-scale event-driven dataset ES-ImageNet. Regarding training acceleration, the method reaches the same (or higher) best accuracy as comparable leaky integrate-and-fire (LIF) SNNs using 1/8 of the training time on ImageNet-1K and 1/2 on ES-ImageNet, and a time-accuracy benchmark is also provided for the new dataset ES-UCF101. These results reveal the similarity of parameter functions between ANNs and SNNs and demonstrate various potential applications of this SNN training pipeline.
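
To make the core idea in the summary concrete, below is a minimal PyTorch sketch of "pretrain an ANN, transfer its weights into a structurally matched SNN, then fine-tune with BP through the unrolled LIF dynamics." Everything here is an illustrative assumption, not the paper's implementation: the toy architecture (TinySNN), the rectangular surrogate gradient, the hard-reset LIF, and rate decoding over T time steps are generic choices; the actual pipe-S/pipe-D pipelines differ in detail.

import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (generic choice)."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradient only near the firing threshold (window width 0.5 is arbitrary).
        return grad_out * (v.abs() < 0.5).float()


class LIF(nn.Module):
    """Leaky integrate-and-fire neurons with hard reset, unrolled over time."""

    def __init__(self, tau=2.0, v_th=1.0):
        super().__init__()
        self.decay = 1.0 / tau
        self.v_th = v_th

    def forward(self, x_seq):              # x_seq: (T, B, ...) input currents
        v = torch.zeros_like(x_seq[0])
        spikes = []
        for x in x_seq:
            v = v * self.decay + x         # leaky integration
            s = SpikeFn.apply(v - self.v_th)
            v = v * (1.0 - s)              # hard reset where a spike fired
            spikes.append(s)
        return torch.stack(spikes)


class TinySNN(nn.Module):
    """Same weight shapes as the ANN below, with ReLU replaced by LIF."""

    def __init__(self, T=4):
        super().__init__()
        self.T = T
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.lif = LIF()
        self.fc = nn.Linear(16 * 32 * 32, 10)

    def forward(self, x):                  # x: (B, 3, 32, 32) static image
        cur = self.conv(x)
        cur_seq = cur.unsqueeze(0).expand(self.T, *cur.shape)  # repeat current over T steps
        s_seq = self.lif(cur_seq)
        logits = torch.stack([self.fc(s.flatten(1)) for s in s_seq])
        return logits.mean(0)              # rate decoding: average logits over time


# A pretrained ANN with matching layer shapes (randomly initialized here as a stand-in).
ann = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Flatten(), nn.Linear(16 * 32 * 32, 10))

snn = TinySNN()
snn.conv.load_state_dict(ann[0].state_dict())  # transfer pretrained weights
snn.fc.load_state_dict(ann[3].state_dict())

# Fine-tune the SNN with ordinary BP through the unrolled dynamics.
opt = torch.optim.SGD(snn.parameters(), lr=0.1)
x, y = torch.randn(2, 3, 32, 32), torch.tensor([1, 7])
loss = nn.functional.cross_entropy(snn(x), y)
loss.backward()
opt.step()

The weight transfer works because the SNN reuses the ANN's layer shapes exactly; only the nonlinearity changes, which is why the pretrained parameters remain a useful initialization.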
ISSN: 2162-237X (print); 2162-2388 (electronic)
DOI: 10.1109/TNNLS.2022.3217796