

Poster in Workshop: Time Series in the Age of Large Models

Stochastic Sparse Sampling: A Framework for Local Explainability in Variable-Length Medical Time Series

Xavier Mootoo · Alan Diaz-Montiel · Milad Lankarany · Hina Tabassum


Abstract:

While the majority of time series classification research has focused on modeling fixed-length sequences, variable-length time series classification (VTSC) remains underexplored, despite its relevance in healthcare and various other real-world applications. Existing finite-context methods, such as Transformer-based architectures, require noisy input processing (e.g., padding or truncation) when applied to VTSC, while infinite-context methods, including recurrent neural networks, struggle with information overload over longer sequences. Furthermore, current state-of-the-art (SOTA) methods lack explainability and generally fail to provide insights for local signal regions, reducing their reliability in high-risk scenarios. To address these issues, we introduce Stochastic Sparse Sampling (SSS), a novel framework for explainable VTSC. SSS handles variable-length sequences by sparsely sampling fixed-length windows, computing localized predictions for each, and aggregating them into a final prediction. We apply SSS to the task of seizure onset zone (SOZ) localization, a critical VTSC problem that requires identifying seizure-inducing brain regions from variable-length electrophysiological time series. We evaluate SSS on the Epilepsy iEEG Multicenter Dataset, a heterogeneous collection of intracranial electroencephalography (iEEG) recordings, and achieve performance comparable to current SOTA methods while enabling localized visual analysis of model predictions.
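The sample-then-aggregate procedure the abstract describes can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' implementation: the window classifier (`window_model`), the window length, the number of sampled windows, and the mean-pooling aggregation are all hypothetical choices made for the example; the paper's actual sampling scheme and aggregation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_sparse_sampling(series, window_model, window_size=128,
                               n_windows=32, rng=rng):
    """Sketch of SSS-style inference for one variable-length series.

    series:       1-D array of length T (T may vary per example).
    window_model: callable mapping a (window_size,) array to a class
                  probability in [0, 1]; a hypothetical stand-in for a
                  trained fixed-window classifier.
    Returns the aggregated prediction plus the per-window local
    predictions and their start indices (the local explainability signal).
    """
    T = len(series)
    if T < window_size:
        raise ValueError("series shorter than window_size")
    # Sparsely sample fixed-length window start positions at random.
    starts = rng.integers(0, T - window_size + 1, size=n_windows)
    # Compute a localized prediction for each sampled window.
    local_preds = np.array(
        [window_model(series[s:s + window_size]) for s in starts]
    )
    # Aggregate local predictions into a single series-level score
    # (mean pooling here; an assumption for this sketch).
    return local_preds.mean(), starts, local_preds

# Toy usage: a dummy "classifier" that scores a window by its energy.
if __name__ == "__main__":
    toy_series = rng.standard_normal(1000)
    energy_model = lambda w: 1.0 / (1.0 + np.exp(-(w ** 2).mean() + 1.0))
    score, starts, local_preds = stochastic_sparse_sampling(toy_series, energy_model)
    print(f"aggregated score: {score:.3f}")
    print("local scores at starts:", list(zip(starts[:5], local_preds[:5].round(3))))
```

Because every prediction is tied to a concrete window position, the per-window scores can be plotted against the original signal, which is what enables the localized visual analysis mentioned above.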
