

Poster

UniTS: A Unified Multi-Task Time Series Model

Shanghua Gao · Teddy Koker · Owen Queen · Tom Hartvigsen · Theodoros Tsiligkaridis · Marinka Zitnik

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Advances in time series models are driving a shift from conventional deep learning methods to pre-trained foundation models. While pre-trained transformers and reprogrammed text-based LLMs report state-of-the-art results, the best-performing architectures vary significantly across tasks, and models often have limited scope, such as focusing only on time series forecasting. Models that unify predictive and generative time series tasks under a single framework remain challenging to achieve. We introduce UniTS, a multi-task time series model that uses task tokenization to express predictive and generative tasks within a single model. UniTS leverages a modified transformer block designed to obtain universal time series representations. This design induces transferability from a heterogeneous, multi-domain pre-training dataset—often with diverse dynamic patterns, sampling rates, and temporal scales—to many downstream datasets, which can also be diverse in task specifications and data domains. Across 38 datasets spanning human activity sensors, healthcare, engineering, and finance domains, the UniTS model performs favorably against 12 forecasting models, 20 classification models, 18 anomaly detection models, and 16 imputation models, including repurposed text-based LLMs. UniTS demonstrates effective few-shot and prompt learning capabilities when evaluated on new data domains and tasks. In the conventional single-task setting, UniTS outperforms strong task-specialized time series models.
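To make the task-tokenization idea concrete, the sketch below shows one minimal way such a model could be organized: time series patches become sample tokens, a learnable task token is prepended per task, a shared transformer processes the combined sequence, and each task reads its output from the task-token position. This is an illustrative assumption in PyTorch (class, parameter names, and shapes are hypothetical), not the authors' implementation or the UniTS architecture itself.

```python
# Hypothetical sketch of task tokenization for a unified multi-task time series model.
# Names, shapes, and the single-task-token readout are illustrative assumptions.
import torch
import torch.nn as nn


class TaskTokenizedModel(nn.Module):
    def __init__(self, d_model=64, patch_len=16, n_heads=4, n_layers=2,
                 forecast_len=96, n_classes=5):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)            # patch -> sample token
        self.task_tokens = nn.ParameterDict({                 # one learnable token per task
            "forecast": nn.Parameter(torch.randn(1, 1, d_model)),
            "classify": nn.Parameter(torch.randn(1, 1, d_model)),
        })
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)  # shared across all tasks
        self.heads = nn.ModuleDict({                           # lightweight per-task readouts
            "forecast": nn.Linear(d_model, forecast_len),
            "classify": nn.Linear(d_model, n_classes),
        })

    def forward(self, x, task):
        # x: (batch, length) univariate series, split into non-overlapping patches
        b, length = x.shape
        patches = x[:, : length - length % self.patch_len].reshape(b, -1, self.patch_len)
        tokens = self.embed(patches)                           # (b, n_patches, d_model)
        task_tok = self.task_tokens[task].expand(b, -1, -1)    # prepend the task token
        h = self.backbone(torch.cat([task_tok, tokens], dim=1))
        return self.heads[task](h[:, 0])                       # read out at the task-token position


model = TaskTokenizedModel()
series = torch.randn(8, 256)
print(model(series, "forecast").shape)  # torch.Size([8, 96])
print(model(series, "classify").shape)  # torch.Size([8, 5])
```

In this sketch the transformer weights are shared across tasks and only the task tokens and small heads differ, which is one way a single backbone could serve both predictive and generative tasks as described in the abstract.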
