Desire backpropagation: A lightweight training algorithm for multi-layer spiking neural networks based on spike-timing-dependent plasticity

Bibliographic Details
Published in: Neurocomputing (Amsterdam), 2023-12, Vol. 560, Article 126773
Main Authors: Gerlinghoff, Daniel, Luo, Tao, Goh, Rick Siow Mong, Wong, Weng-Fai
Format: Article
Language: English
Description
Summary: Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks when resource efficiency and computational complexity are of importance. A major advantage of SNNs is their binary information transfer through spike trains, which eliminates multiplication operations. Training SNNs has, however, been a challenge, since neuron models are non-differentiable and traditional gradient-based backpropagation algorithms cannot be applied directly. Furthermore, spike-timing-dependent plasticity (STDP), although a spike-based learning rule, updates weights locally and does not optimize for the output error of the network. We present desire backpropagation, a method to derive the desired spike activity of all neurons, including the hidden ones, from the output error. By incorporating this desire value into the local STDP weight update, we can efficiently capture the neuron dynamics while minimizing the global error and attaining a high classification accuracy. This makes desire backpropagation a spike-based supervised learning rule. We trained three-layer networks to classify MNIST and Fashion-MNIST images and reached accuracies of 98.41% and 87.56%, respectively. In addition, by eliminating a multiplication during the backward pass, we reduce computational complexity and balance arithmetic resources between the forward and backward passes, making desire backpropagation a candidate for training on low-resource devices.

Highlights:
• The learning rule considers computational complexity and resource efficiency.
• A ternary desire value defined for each neuron indicates whether it is desired to spike.
• Desire values guide STDP-based weight updates and allow computation of local errors.
• Validation accuracy is 98.41% on the MNIST dataset and 87.56% on Fashion-MNIST.
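To make the mechanism in the summary concrete, below is a minimal NumPy sketch of one desire-guided STDP step: each neuron carries a ternary desire value (+1: should spike, -1: should stay silent, 0: indifferent), hidden-layer desires are derived by propagating output desires backward through the weights, and a weight is adjusted only where a presynaptic spike coincides with a non-zero postsynaptic desire. The function names, the thresholding scheme, and all hyper-parameters here are illustrative assumptions, not the paper's exact formulation.

import numpy as np

def desire_stdp_update(weights, pre_spikes, post_desire, lr=0.01):
    """Local, desire-gated STDP step for one layer.

    weights     -- (n_post, n_pre) synaptic weight matrix
    pre_spikes  -- (n_pre,) binary spike indicators, values in {0, 1}
    post_desire -- (n_post,) ternary desire values, in {-1, 0, +1}
    """
    # A synapse is potentiated when its presynaptic neuron spiked and the
    # postsynaptic neuron is desired to spike (+1), depressed when it is
    # desired to stay silent (-1), and left untouched otherwise. With both
    # factors confined to {-1, 0, 1}, the outer product reduces to sign
    # selection rather than a true multiplication.
    weights += lr * np.outer(post_desire, pre_spikes)
    return weights

def propagate_desire(weights, post_desire, theta=0.1):
    """Derive ternary desires for the presynaptic layer (hypothetical
    thresholding scheme): a hidden neuron is desired to spike if it mostly
    drives neurons that should spike, and to stay silent if the opposite."""
    drive = weights.T @ post_desire          # accumulated signed influence
    return np.sign(drive) * (np.abs(drive) > theta)

# Example: a 784-input, 10-class output layer where class 3 should spike.
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(10, 784))
pre = (rng.random(784) < 0.2).astype(float)      # one time step of input spikes
out_desire = -np.ones(10)                        # non-target classes: stay silent
out_desire[3] = 1.0                              # target class: should spike
hidden_desire = propagate_desire(W, out_desire)  # desire for the layer below
W = desire_stdp_update(W, pre, out_desire)

Because spikes and desire values all lie in {-1, 0, 1}, the backward pass in this sketch reduces to sign-gated additions, which illustrates the property the summary credits with removing a multiplication and balancing arithmetic cost between the forward and backward passes.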
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2023.126773