Ricardo
(2022-05-30 23:39):
#paper https://arxiv.org/abs/2102.04159v3 Deep Residual Learning in Spiking Neural Networks. Published at NeurIPS 2021. Modern deep learning built on artificial neural networks (ANNs) has made considerable progress in many fields, but because of its mathematical black-box nature and high power consumption, part of the community has turned to spiking neural networks (SNNs) based on biological spiking neurons. SNNs are more biologically interpretable, event-driven, and low-power, and are seen as a potential competitor to ANNs. However, SNNs still face many theoretical and engineering problems, and on some complex tasks they still perform worse than ANNs. Given the huge success of residual learning in ANNs, it is natural to ask how residual learning can be used to train SNNs. Earlier work imitated the standard ANN residual block and simply replaced the ReLU activation with spiking neurons, but such networks suffer from the degradation problem as depth grows, so residual learning is hard to realize. In this paper the authors prove that this earlier residual approach for SNNs causes vanishing/exploding gradients and therefore can hardly implement identity mapping; they then propose a spike-element-wise (SEW) block to overcome this problem. The experimental results are quite nice: the method beats previous directly trained SNNs on several datasets (still below ANN accuracy, of course), performance keeps improving as more layers are added, and for the first time an SNN with more than 100 layers can be trained directly.
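To make the structural difference concrete, here is a minimal, self-contained sketch (my own toy illustration, not the paper's SpikingJelly implementation; single time step, a toy IF neuron with a sigmoid surrogate gradient, and illustrative layer sizes):

```python
# Toy sketch only: single time step, illustrative channel sizes, a simplified
# IF neuron. The real paper operates over multiple time steps.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, sigmoid-derivative surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()                      # fire when membrane potential crosses threshold 1.0

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sg = torch.sigmoid(4.0 * (v - 1.0))
        return grad_output * 4.0 * sg * (1.0 - sg)     # surrogate gradient of the Heaviside step


class IFNeuron(nn.Module):
    """Toy integrate-and-fire neuron: binary spike output, differentiable via the surrogate."""

    def forward(self, x):
        return SurrogateSpike.apply(x)


class SpikingBasicBlock(nn.Module):
    """'Spiking ResNet' style block: the ReLU of the standard block is replaced by a spiking
    neuron, so the residual sum itself passes through a neuron: out = SN(F(s) + s).
    Identity mapping would require the neuron to reproduce its input, which it generally cannot."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.sn1 = IFNeuron()
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.sn2 = IFNeuron()

    def forward(self, s):
        out = self.sn1(self.bn1(self.conv1(s)))
        out = self.bn2(self.conv2(out))
        return self.sn2(out + s)                       # spiking neuron applied after the residual sum


class SEWBlock(nn.Module):
    """Spike-element-wise block: the neuron comes before the element-wise connection,
    so out = g(SN(F(s)), s). With g = ADD, a branch that fires no spikes gives out = s,
    i.e. identity mapping is trivially available. (The paper also discusses g = AND and IAND;
    note that ADD of two binary spike maps can exceed 1.)"""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.sn1 = IFNeuron()
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.sn2 = IFNeuron()

    def forward(self, s):
        out = self.sn1(self.bn1(self.conv1(s)))
        out = self.sn2(self.bn2(self.conv2(out)))
        return out + s                                 # element-wise ADD of two spike tensors


if __name__ == "__main__":
    x = (torch.rand(2, 8, 16, 16) > 0.5).float()       # a fake binary spike map
    print(SpikingBasicBlock(8)(x).shape, SEWBlock(8)(x).shape)
```

The key point is that in the SEW block the shortcut bypasses the spiking neuron, so gradients (and identity mappings) can flow through the shortcut without being reshaped by the neuron's surrogate derivative.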
arXiv, 2022. DOI: 10.48550/arXiv.2102.04159
Deep Residual Learning in Spiking Neural Networks
Abstract:
Deep Spiking Neural Networks (SNNs) present optimization difficulties for gradient-based approaches due to discrete binary activation and complex spatial-temporal dynamics. Considering the huge success of ResNet in deep learning, it would be natural to train deep SNNs with residual learning. Previous Spiking ResNet mimics the standard residual block in ANNs and simply replaces ReLU activation layers with spiking neurons, which suffers the degradation problem and can hardly implement residual learning. In this paper, we propose the spike-element-wise (SEW) ResNet to realize residual learning in deep SNNs. We prove that the SEW ResNet can easily implement identity mapping and overcome the vanishing/exploding gradient problems of Spiking ResNet. We evaluate our SEW ResNet on ImageNet, DVS Gesture, and CIFAR10-DVS datasets, and show that SEW ResNet outperforms the state-of-the-art directly trained SNNs in both accuracy and time-steps. Moreover, SEW ResNet can achieve higher performance by simply adding more layers, providing a simple method to train deep SNNs. To our best knowledge, this is the first time that directly training deep SNNs with more than 100 layers becomes possible.
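A rough back-of-the-envelope comparison of the two block forms (my own simplification of the paper's argument, not a quote from it; SN is the spiking neuron, Θ' its surrogate derivative, f^l the residual branch):

```latex
% Spiking ResNet block: the neuron sits after the residual sum
o^{l} = \mathrm{SN}\!\left(f^{l}(o^{l-1}) + o^{l-1}\right)
% SEW block with g = ADD: the neuron sits before the element-wise connection
o^{l} = \mathrm{SN}\!\left(f^{l}(o^{l-1})\right) + o^{l-1}
% Backpropagating through k stacked blocks whose residual branches contribute
% little, Spiking ResNet multiplies k surrogate-derivative factors,
\frac{\partial o^{l+k}}{\partial o^{l}} \approx \prod_{i=1}^{k} \Theta'(\cdot),
% which vanishes when \Theta' < 1 and explodes when \Theta' > 1 at the operating
% points, while the SEW shortcut contributes a direct additive path,
\frac{\partial o^{l+k}}{\partial o^{l}} \approx 1 .
```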