Intrinsic plasticity coding improved spiking actor network for reinforcement learning
Published in: Neural Networks, 2025-04, Vol. 184, p. 107054, Article 107054
Format: Article
Language: English
Summary: Deep reinforcement learning (DRL) exploits the powerful representational capabilities of deep neural networks (DNNs) and has achieved significant success. However, compared to DNNs, spiking neural networks (SNNs), which operate on binary signals, more closely resemble the biological characteristics of efficient learning observed in the brain. In SNNs, spiking neurons exhibit complex dynamic characteristics and learn according to principles of biological plasticity. Inspired by the brain's efficient computational mechanisms, in which information encoding plays a critical role, we propose an intrinsic plasticity coding improved spiking actor network (IP-SAN) for RL to achieve effective decision-making. The IP-SAN integrates adaptive population coding at the network level with dynamic spiking neuron coding at the neuron level, improving spatiotemporal state representation and promoting more accurate biological simulation. Experimental results show that our IP-SAN outperforms several state-of-the-art methods in five continuous control tasks.
ISSN: 0893-6080, 1879-2782
DOI: 10.1016/j.neunet.2024.107054
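The record contains no implementation details beyond the summary above. Purely as an illustration of the two ingredients the summary names, population coding of a continuous state at the network level and spiking-neuron dynamics at the neuron level, the sketch below shows a generic Gaussian population encoder feeding a layer of leaky integrate-and-fire (LIF) neurons. The receptive-field layout, time constant, threshold, and layer sizes are assumptions made for the sketch, not parameters from the paper, and the adaptive/intrinsic-plasticity updates of the IP-SAN itself are not reproduced here.

```python
import numpy as np

def population_encode(state, neurons_per_dim=10, low=-1.0, high=1.0, sigma=0.15):
    """Encode each continuous state dimension with a bank of Gaussian
    receptive fields (illustrative centers and width, not the paper's)."""
    centers = np.linspace(low, high, neurons_per_dim)                 # tuning-curve centers
    responses = np.exp(-0.5 * ((state[:, None] - centers[None, :]) / sigma) ** 2)
    return responses.ravel()                                          # one flat population vector

def lif_spikes(input_current, timesteps=16, tau=2.0, v_threshold=0.5):
    """Drive a layer of leaky integrate-and-fire neurons with a constant
    input current and return the resulting binary spike trains
    (placeholder time constant and threshold)."""
    v = np.zeros_like(input_current)
    spikes = np.zeros((timesteps, input_current.size))
    for t in range(timesteps):
        v = v + (input_current - v) / tau      # leaky integration toward the input
        fired = v >= v_threshold
        spikes[t, fired] = 1.0
        v[fired] = 0.0                         # hard reset after a spike
    return spikes

# Example: encode a 3-dimensional continuous state and generate spike trains.
state = np.array([0.2, -0.5, 0.9])
rates = population_encode(state)        # shape (3 * neurons_per_dim,)
spike_train = lif_spikes(rates)         # shape (timesteps, 30), binary
print(spike_train.shape, spike_train.mean())
```

In an actor network along the lines the summary describes, such spike trains would then pass through trainable spiking layers whose output is decoded into continuous actions; those components, and the intrinsic-plasticity rule that gives IP-SAN its name, are beyond the scope of this sketch.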