HetSyn

# At Last, My Academic Career Has Left a Trace

Ahem. After all these years, I suddenly remembered that I still have a blog.

My very first submission actually got accepted!!! I meant to write something longer, but once I started I didn't know what to say. In short: everyone is welcome to come take a look at our work.

We are pleased to present our work, “HetSyn: Versatile Timescale Integration in Spiking Neural Networks via Heterogeneous Synapses”.

# Motivations & Contributions

Motivations

  • Spiking Neural Networks (SNNs) offer a biologically plausible and energy-efficient computing paradigm characterized by sparse, event-driven signaling and intrinsic temporal processing capabilities.
  • Synaptic heterogeneity, which is widely observed across brain regions and cell types, has been largely overlooked in the design of SNNs, and its computational potential remains underexplored.

Distributions of Synaptic Time Constants

Contributions

  • We propose HetSyn, the first modeling framework to explicitly explore synaptic heterogeneity in SNNs.
  • We demonstrate that HetSyn serves as a unified and extensible framework, capable of representing a wide range of existing spiking neuron models.
  • We instantiate HetSyn as HetSynLIF and demonstrate its effectiveness across multiple temporal tasks.

# Methods

# Vanilla LIF

The vanilla Leaky Integrate-and-Fire (LIF) neuron, where all inputs share the same membrane time constant, is defined using the following differential equation:

$$
\frac{dV}{dt}=-\frac{V-V_{\text{rest}}}{\tau_{\text{m}}}+\sum_{i,j} w_{i}\cdot\delta\left(t-t_{i}^{j}\right)-\vartheta\cdot\sum_{j}\delta\left(t-t_{\text{s}}^{j}\right)
$$

In practice, computer simulations typically use a discrete-time formulation, and a spike is emitted by a Heaviside step function when the membrane potential exceeds a threshold.

$$
V^{t}=\rho\cdot V^{t-1}+\sum_{i} w_{i}\cdot z_{i}^{t}-\vartheta\cdot z^{t-1}
$$

$$
z^{t}=\mathrm{H}\left(V^{t}-\vartheta\right)
$$
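As a concrete illustration, the discrete update above fits in a few lines of NumPy. This is a minimal sketch for exposition only (function and variable names are mine, not from the paper's code); `rho` plays the role of the membrane decay factor $\rho$:

```python
import numpy as np

def lif_step(V, z_prev, inputs, w, rho=0.9, theta=1.0):
    """One discrete-time step of a vanilla LIF neuron.

    V       : membrane potential from the previous step (scalar)
    z_prev  : spike emitted at the previous step (0 or 1)
    inputs  : input spikes z_i^t, shape (n_syn,)
    w       : synaptic weights, shape (n_syn,)
    rho     : shared membrane decay factor
    theta   : firing threshold
    """
    # Leak, integrate weighted input spikes, subtract the soft reset.
    V = rho * V + np.dot(w, inputs) - theta * z_prev
    # Heaviside step: emit a spike when the potential reaches the threshold.
    z = float(V >= theta)
    return V, z
```

Note that every synapse here shares the single decay factor `rho`; this is exactly the uniformity that HetSyn relaxes below.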

# Equipped with HetSyn

In HetSyn, we replace the uniform time constant with synapse-specific decay factors and model the reset process as a negative current injection. Each synaptic current now evolves independently over time, allowing the neuron to integrate information at multiple timescales:

$$
V^{t}=\sum_{i} I_{i}^{t}-J^{t}
$$

$$
\frac{dI_{i}}{dt}=-\frac{I_{i}}{\tau_{\text{s},i}}+\sum_{j} w_{i}\cdot\delta\left(t-t_{i}^{j}\right)
$$

$$
\frac{dJ}{dt}=-\frac{J}{\tau_{J}}+\sum_{j}\vartheta\cdot\delta\left(t-t_{\text{s}}^{j}\right)
$$

$$
I_{i}^{t}=r_{i}\cdot I_{i}^{t-1}+w_{i}\cdot z_{i}^{t}
$$

$$
J^{t}=\kappa\cdot J^{t-1}+\vartheta\cdot z^{t-1}
$$
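The discrete HetSyn update can likewise be sketched in NumPy. Again an illustrative sketch with assumed names: each synapse keeps its own current $I_i$ with its own decay factor $r_i$, and the reset is a separately decaying current $J$:

```python
import numpy as np

def hetsyn_step(I, J, z_prev, inputs, w, r, kappa=0.9, theta=1.0):
    """One discrete-time step of a HetSynLIF neuron.

    I      : per-synapse currents from the previous step, shape (n_syn,)
    J      : reset current from the previous step (scalar)
    z_prev : spike emitted at the previous step (0 or 1)
    inputs : input spikes z_i^t, shape (n_syn,)
    w, r   : synaptic weights and synapse-specific decay factors, shape (n_syn,)
    kappa  : decay factor of the reset current
    theta  : firing threshold
    """
    I = r * I + w * inputs          # each synapse decays at its own timescale
    J = kappa * J + theta * z_prev  # reset modeled as a decaying negative current
    V = I.sum() - J                 # membrane potential sums the synaptic currents
    z = float(V >= theta)
    return I, J, z
```

Because each entry of `r` can differ, a single neuron can hold fast-fading and slow-fading input traces at the same time, which is what enables multi-timescale integration.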

# Generalization

HetSynLIF generalizes to three existing neuron models by adjusting its parameters:

  • If all synaptic decay factors $r_{ji}$ and the reset-current decay factor $\kappa_j$ are identical and equal to a shared value $\rho$, the HetSynLIF model reduces to the HomNeuLIF model;

$$
r_{ji}=\kappa_{j}=\rho \rightarrow \text{HomNeuLIF}
$$

  • Building on this, if we further decompose the reset current into a standard component $J_{\vartheta}^{t}$ and an additional adaptation current $J_{\alpha}^{t}$, HetSynLIF generalizes to the HomNeuALIF model;

$$
r_{ji}=\kappa_{j}=\rho \;\&\; V_{j}^{t}=\sum_{i} I_{ji}^{t}-J_{\vartheta}^{t}-J_{\alpha}^{t} \rightarrow \text{HomNeuALIF}
$$

  • For each post-synaptic neuron $j$, if all $r_{ji}$ and $\kappa_{j}$ share a neuron-specific value $\rho_{j}$, the HetSynLIF model generalizes to the HetNeuLIF model.

$$
r_{ji}=\kappa_{j}=\rho_{j} \rightarrow \text{HetNeuLIF}
$$
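The first reduction is easy to verify numerically: when every $r_i$ and $\kappa$ are set to one shared $\rho$, the HetSyn update produces exactly the same membrane-potential trajectory as the vanilla LIF update. A self-contained sketch (illustrative only, with made-up random inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n_syn, T, rho, theta = 4, 100, 0.9, 1.0
w = rng.normal(0.0, 0.5, n_syn)
spikes = (rng.random((T, n_syn)) < 0.2).astype(float)  # random input spike trains

# Vanilla LIF: a single shared decay factor rho.
V, z = 0.0, 0.0
V_lif = []
for t in range(T):
    V = rho * V + w @ spikes[t] - theta * z
    z = float(V >= theta)
    V_lif.append(V)

# HetSynLIF with r_i = kappa = rho for every synapse.
I, J, z = np.zeros(n_syn), 0.0, 0.0
V_het = []
for t in range(T):
    I = rho * I + w * spikes[t]       # all synapses collapse to one timescale
    J = rho * J + theta * z
    V = I.sum() - J
    z = float(V >= theta)
    V_het.append(V)

print(np.allclose(V_lif, V_het))  # the two trajectories coincide
```

This works because $\sum_i I_i^t - J^t = \rho\,(\sum_i I_i^{t-1} - J^{t-1}) + \sum_i w_i z_i^t - \vartheta z^{t-1}$, which is exactly the HomNeuLIF recurrence for $V^t$.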

# Experiments

We demonstrate that HetSynLIF not only improves the performance of SNNs across a variety of tasks, but also exhibits strong robustness to noise, enhanced working-memory performance, and efficiency under limited neuron resources.

Performance on Pattern Generation Task

In the pattern generation task, HetSynLIF learns long-term temporal dependencies faster and more accurately, and shows strong robustness to noise.

Performance on Delayed Match-to-Sample Task

In the delayed match-to-sample task, HetSynLIF demonstrates excellent working-memory performance, even with limited neuron resources.

Performance on Speech Recognition

Accuracy Comparison on Four Datasets

In addition, the HetSynLIF model delivers outstanding performance on the other three datasets as well, consistently outperforming prior methods and highlighting its effectiveness in processing multi-timescale temporal dynamics across both speech and visual recognition tasks.

# Conclusion

In summary, HetSyn brings synaptic heterogeneity into SNN modeling and achieves versatile timescale integration. We believe this framework opens new directions for building more efficient and brain-inspired learning systems.


Unless otherwise stated, all articles on this blog are licensed under CC BY-SA 4.0. Please credit the source when reposting!