Poster in Workshop: Time Series in the Age of Large Models

LETS-C: Leveraging Text Embedding for Time Series Classification

Rachneet Kaur · Zhen Zeng · Tucker Balch · Manuela Veloso


Abstract:

Recent advancements in language modeling have shown promising results in time series analysis, with fine-tuned pre-trained large language models (LLMs) achieving state-of-the-art (SOTA) performance on standard benchmarks. However, these LLMs require millions of trainable parameters, and their large size is a significant drawback. We propose an alternative approach to leveraging the success of language modeling in the time series domain. Instead of fine-tuning an LLM, we use a text embedding model to embed the time series and pair the embeddings with a simple classification head composed of convolutional neural networks and a multilayer perceptron. We conducted extensive experiments on a well-established time series classification benchmark and demonstrate that LETS-C not only outperforms the current SOTA in classification accuracy but also offers a lightweight solution, using only 14.5% of the trainable parameters of the SOTA model. Our findings suggest that leveraging text embedding models to encode time series data, combined with a simple yet effective classification head, is a promising direction for achieving high-performance time series classification while maintaining a lightweight model architecture.
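The abstract describes a two-stage pipeline: embed each time series with an off-the-shelf text embedding model, then classify the embedding with a small CNN + MLP head. The sketch below illustrates that idea under stated assumptions; the text serialization format, the specific embedding model (a sentence-transformers checkpoint), and the layer sizes are illustrative choices, not the authors' exact configuration.

```python
# Minimal sketch of an embed-then-classify pipeline in the spirit of LETS-C.
# Assumptions (not from the paper): how series are serialized to text, the
# choice of embedding model, and the CNN/MLP layer sizes.
import numpy as np
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer  # assumed embedding backbone


def series_to_text(series: np.ndarray, decimals: int = 3) -> str:
    """Serialize a univariate time series as a comma-separated string of values."""
    return ", ".join(f"{v:.{decimals}f}" for v in series)


class CNNMLPHead(nn.Module):
    """Lightweight classification head: 1-D convolution over the embedding
    followed by a small multilayer perceptron."""

    def __init__(self, num_classes: int, channels: int = 32, pooled_len: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(pooled_len),  # fixed length regardless of embedding dim
        )
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * pooled_len, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, embed_dim) -> add a channel dimension for Conv1d
        return self.mlp(self.conv(embeddings.unsqueeze(1)))


if __name__ == "__main__":
    # Toy data: 8 random univariate series, 2 classes.
    rng = np.random.default_rng(0)
    series = rng.standard_normal((8, 50))
    labels = torch.tensor([0, 1] * 4)

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any text embedding model could be swapped in
    texts = [series_to_text(s) for s in series]
    embeddings = torch.tensor(embedder.encode(texts), dtype=torch.float32)

    head = CNNMLPHead(num_classes=2)
    logits = head(embeddings)
    loss = nn.functional.cross_entropy(logits, labels)
    print(logits.shape, loss.item())
```

Only the small head is trained in this sketch; the text embedding model is used as a frozen feature extractor, which is consistent with the abstract's emphasis on a low trainable-parameter count.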