Poster in Workshop: Fine-Tuning in Modern Machine Learning: Principles and Scalability

Towards Long-Context Time Series Foundation Models With A Handful Of Additional Parameters

Nina Żukowska · Mononito Goswami · Michal Wilinski · Willa Potosnak · Artur Dubrawski


Abstract:

Time series foundation models (TSFMs) have shown impressive performance on a variety of tasks across a wide range of domains, even in zero-shot settings. However, most of these models are designed to handle short univariate time series as input. This limits their practical use, especially in domains such as healthcare, which produce copious amounts of long, multivariate data with strong temporal and intra-variate dependencies. Our study bridges this gap by cataloging and systematically comparing context expansion techniques from both the language and time series domains, and by introducing a novel compressive memory mechanism that allows encoder-only TSFMs to effectively model intra-variate dependencies while adding only a handful of additional learnable parameters. We demonstrate the benefits of our approach by imbuing MOMENT, a recent family of multi-task time series foundation models, with multivariate context. We evaluate our approach in a supervised setting, as well as by pretraining and fine-tuning a linear head together with the method-specific parameters.
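The abstract does not spell out the mechanism, so the PyTorch sketch below is only a rough illustration of how a compressive memory could inject cross-variate context into an encoder-only TSFM with few extra parameters: patch embeddings from all variates are pooled into a small set of learnable memory slots via cross-attention and prepended to each variate's sequence. The module name, the slot-based design, and the tensor shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class CompressiveMemory(nn.Module):
    """Hypothetical sketch of a compressive memory: compress per-variate patch
    embeddings into a few learnable slots so an encoder-only backbone can
    condition on other variates at the cost of only a handful of parameters."""

    def __init__(self, d_model: int, num_slots: int = 4, num_heads: int = 4):
        super().__init__()
        # The "handful" of extra learnable parameters: a few memory slots plus
        # one cross-attention block shared across all variates.
        self.slots = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_variates, num_patches, d_model) patch embeddings
        b, v, p, d = x.shape
        flat = x.reshape(b, v * p, d)                 # pool over all variates
        queries = self.slots.unsqueeze(0).expand(b, -1, -1)
        memory, _ = self.attn(queries, flat, flat)    # (batch, num_slots, d)
        # Prepend the compressed cross-variate memory to each variate's sequence
        mem = memory.unsqueeze(1).expand(b, v, -1, d)
        return torch.cat([mem, x], dim=2)             # (b, v, num_slots + p, d)
```

In this sketch the frozen backbone would then process each variate's augmented sequence independently, so cross-variate information flows only through the compressed slots; whether the paper uses this particular pooling-and-prepend design is not stated in the abstract.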
