Poster

Spike-based Neuromorphic Model for Sound Source Localization

Dehao Zhang · Shuai Wang · Ammar Belatreche · Wenjie Wei · Yichen Xiao · Haorui Zheng · Zijian Zhou · Malu Zhang · Yang Yang

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Biological systems possess remarkable sound source localization (SSL) capabilities that are critical for survival in complex environments. This ability arises from the collaboration between the auditory periphery, which encodes sound as precisely timed spikes, and the auditory cortex, which performs spike-based computations. Inspired by these biological mechanisms, we propose a novel neuromorphic SSL model that integrates spike-based neural encoding and computation. The model employs Resonate-and-Fire (RF) neurons with a phase-locking coding (RF-PLC) method to achieve energy-efficient audio processing. The RF-PLC method leverages the resonance properties of RF neurons to efficiently convert audio signals into a time-frequency representation and to encode interaural time difference (ITD) cues into discriminative spike patterns. In addition, biological adaptations such as frequency-band selectivity and short-term memory enhance SSL capability in noisy environments. Inspired by these adaptations, we introduce a spike-driven multi-auditory attention (MAA) module that significantly improves both the accuracy and robustness of the proposed SSL model in real-world conditions. Extensive experiments demonstrate that our model achieves state-of-the-art accuracy, outperforming existing approaches. Furthermore, it shows exceptional noise robustness, maintaining high accuracy even at very low signal-to-noise ratios. By mimicking biological hearing, this neuromorphic approach contributes to the development of high-performance, explainable artificial intelligence systems capable of superior performance in real-world environments.
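
To illustrate how the resonance property of RF neurons can turn a raw waveform into frequency-selective, phase-locked spike trains, the minimal sketch below simulates a small bank of RF neurons, each tuned to one center frequency, following the standard Resonate-and-Fire dynamics dz/dt = (b + iω)z + I(t). The thresholding rule, reset, and all parameter values are illustrative assumptions for this sketch, not the authors' RF-PLC implementation.

```python
import numpy as np

def rf_neuron_bank(audio, sample_rate, center_freqs, decay=-20.0, threshold=0.005):
    """Drive one Resonate-and-Fire neuron per center frequency with the same signal.

    Each neuron follows the standard RF dynamics dz/dt = (decay + i*omega) * z + I(t):
    it resonates when the input carries energy near its preferred frequency and
    emits a spike whenever the imaginary part of its state crosses the threshold,
    a crude stand-in for phase-locked spike encoding. Parameter values here are
    illustrative assumptions, not the authors' RF-PLC settings.
    """
    dt = 1.0 / sample_rate
    spikes = np.zeros((len(center_freqs), len(audio)), dtype=bool)
    for k, f in enumerate(center_freqs):
        lam = decay + 1j * 2.0 * np.pi * f   # complex eigenvalue of the oscillator
        step = np.exp(lam * dt)              # exponential-Euler propagator (stable)
        z = 0.0 + 0.0j
        for t, x in enumerate(audio):
            z = step * z + dt * x            # leaky resonance driven by the input
            if z.imag > threshold:
                spikes[k, t] = True
                z = 0.0 + 0.0j               # reset the oscillator after a spike
    return spikes

# Toy usage: a 500 Hz tone mainly excites the neuron tuned to 500 Hz.
sr = 16000
t = np.arange(0, 0.1, 1.0 / sr)
tone = 0.5 * np.sin(2 * np.pi * 500.0 * t)
raster = rf_neuron_bank(tone, sr, center_freqs=[250.0, 500.0, 1000.0])
print(raster.sum(axis=1))  # spike count per frequency channel
```

Because the spike times stay locked to the phase of the input within each frequency channel, comparing such spike trains across a left and a right input is one plausible way to expose ITD cues, in the spirit of (though not identical to) the encoding described in the abstract.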
