Poster

Vocal Call Locator Benchmark (VCL'24) for localizing rodent vocalizations from multi-channel audio

Ralph Peterson · Aramis Tanelus · Christopher Ick · Bartul Mimica · Niegil Francis Muttath Joseph · Violet Ivan · Aman Choudhri · Annegret Falkner · Mala Murthy · David Schneider · Dan Sanes · Alex Williams

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Understanding the behavioral and neural dynamics of socially interacting animals is a goal of contemporary neuroscience. Many machine-learning-based techniques have emerged in recent years to make sense of the complex video and neurophysiological data that result from these experiments. However, less focus has been placed on understanding how animals process acoustic information, including social vocalizations. A critical step toward bridging this gap is determining the senders and receivers of acoustic information in social interactions. While sound source localization (SSL) is a classic problem in signal processing, existing approaches are limited in their ability to localize animal-generated sounds in standard laboratory environments. Advances in deep-learning-based algorithms for SSL are likely to help address these limitations; however, there are currently no publicly available models, datasets, or benchmarks for systematically evaluating SSL algorithms in the domain of bioacoustics. Here, we present the VCL'24 Dataset: the first large-scale dataset for benchmarking SSL algorithms in rodents. We acquired synchronized video and multi-channel audio recordings of 770,547 sounds with annotated ground-truth sources across 9 conditions. The dataset provides benchmarks that evaluate SSL performance on real data, simulated acoustic data, and a mixture of real and simulated data. We intend this benchmark to facilitate knowledge transfer between the neuroscience and acoustic machine learning communities, which have historically had limited overlap.
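The abstract frames SSL as a classic signal-processing problem. A minimal sketch of the classical approach is time-difference-of-arrival (TDOA) estimation between a microphone pair using generalized cross-correlation with phase transform (GCC-PHAT); deep-learning SSL methods are typically benchmarked against baselines of this kind. The code below is illustrative only and is not the VCL'24 pipeline: the sample rate, delay, and noise-burst signal are made-up values for the demonstration.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay (in seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)                      # zero-pad to avoid circular wraparound
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12               # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift    # peak index = delay in samples
    return shift / fs

# Synthetic two-microphone recording: a broadband burst reaches mic B
# 23 samples after mic A (all numbers here are hypothetical).
rng = np.random.default_rng(0)
fs = 192_000                                     # Hz, ultrasonic-capable rate
burst = rng.standard_normal(4096)
delay_samples = 23
mic_a = np.concatenate((burst, np.zeros(64)))
mic_b = np.concatenate((np.zeros(delay_samples), burst,
                        np.zeros(64 - delay_samples)))
mic_a += 0.01 * rng.standard_normal(mic_a.size)  # small sensor noise
mic_b += 0.01 * rng.standard_normal(mic_b.size)

tdoa = gcc_phat(mic_b, mic_a, fs)
print(round(tdoa * fs))                          # estimated delay in samples
```

Given the TDOA for several microphone pairs and the known array geometry, the source position can then be triangulated; the deep-learning methods evaluated on benchmarks like VCL'24 instead learn the mapping from multi-channel audio to position directly.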
