

Poster

Abstracted Shapes as Tokens - A Generalizable and Interpretable Model for Time-series Classification

Yunshi Wen · Tengfei Ma · Lily Weng · Lam Nguyen · Anak Agung Julius

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

In time-series analysis, many recent works seek to provide a unified view and representation for time-series across multiple domains, leading to the development of foundation models for time-series data. Despite diverse modeling techniques, existing models are black boxes and fail to provide insights and explanations about their representations. In this paper, we present VQShape, a pre-trained, generalizable, and interpretable model for time-series representation learning and classification. By introducing a novel representation for time-series data, we forge a connection between the latent space of VQShape and shape-level features. Using vector quantization, we show that time-series from different domains can be described using a unified set of low-dimensional codes, where each code can be represented as an abstracted shape in the time domain. On classification tasks, we show that the representations of VQShape can be utilized to build interpretable classifiers, achieving performance comparable to specialist models. Additionally, in zero-shot learning, VQShape and its codebook generalize to previously unseen datasets and domains that are not included in the pre-training process.
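To make the idea of vector-quantized shape tokens concrete, here is a minimal, illustrative PyTorch sketch: a subsequence is encoded to a low-dimensional latent, snapped to its nearest codebook entry, and decoded back into an abstracted shape in the time domain. The class name, layer sizes, and dimensions (`VQShapeSketch`, `window`, `code_dim`, `num_codes`) are assumptions for illustration and do not reflect the paper's actual architecture or training objective.

```python
import torch
import torch.nn as nn

class VQShapeSketch(nn.Module):
    """Minimal sketch of a vector-quantized shape tokenizer (illustrative only)."""

    def __init__(self, window=32, code_dim=8, num_codes=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(window, 64), nn.ReLU(), nn.Linear(64, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(), nn.Linear(64, window))
        # Codebook of low-dimensional codes shared across domains.
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, x):
        z = self.encoder(x)                           # (B, code_dim) latents
        dists = torch.cdist(z, self.codebook.weight)  # distances to all codes
        token_ids = dists.argmin(dim=-1)              # discrete shape tokens
        z_q = self.codebook(token_ids)                # quantized latents
        # Straight-through estimator so gradients reach the encoder.
        z_q = z + (z_q - z).detach()
        shapes = self.decoder(z_q)                    # abstracted shapes in time domain
        return token_ids, shapes

if __name__ == "__main__":
    model = VQShapeSketch()
    windows = torch.randn(16, 32)      # 16 subsequences of length 32
    tokens, shapes = model(windows)
    print(tokens.shape, shapes.shape)  # torch.Size([16]) torch.Size([16, 32])
```

Under this sketch, histograms of token frequencies over a series' subsequences could serve as interpretable features for a downstream linear classifier, which is the general spirit of building interpretable classifiers from discrete shape codes.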
