

Poster in Workshop: Time Series in the Age of Large Models

Joint Embedding goes Temporal

Sofiane ENNADIR · Siavash Golkar · Leopoldo Sarra


Abstract:

Self-supervised learning algorithms have seen great success recently, enabling breakthroughs in natural language and image processing. However, these methods often rely on autoregressive and masked modeling, which aim to reproduce masked information in the input and are therefore susceptible to noise and confounding variables. To address this problem, the Joint-Embedding Predictive Architecture (JEPA) was introduced, with the aim of performing self-supervised learning in the latent space. To leverage these advances for time series analysis, we introduce Time Series JEPA (TS-JEPA), an architecture specifically adapted for time series representation learning. We validate TS-JEPA on both classification and forecasting, showing that it can match or surpass current state-of-the-art baselines on standard datasets. Notably, our approach offers an excellent performance trade-off across different tasks, addressing limitations observed in some existing methods. This research lays the groundwork for developing future time series foundation models based on Joint Embedding.
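To make the distinction from masked input reconstruction concrete, below is a minimal sketch of a JEPA-style objective for time series: the loss compares predicted and target *latent* representations of masked patches rather than raw values. The patch-based Transformer encoder, the EMA target encoder, and all module names here are illustrative assumptions, not the authors' confirmed TS-JEPA design.

```python
# Hypothetical JEPA-style training step for time series (illustrative, not the paper's exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEncoder(nn.Module):
    """Embed non-overlapping time-series patches and encode them with a Transformer."""
    def __init__(self, patch_len=16, d_model=64, n_layers=2, n_heads=4):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                                # x: (batch, length)
        b, t = x.shape
        patches = x.view(b, t // self.patch_len, self.patch_len)
        return self.encoder(self.embed(patches))         # (batch, n_patches, d_model)

context_enc = PatchEncoder()
target_enc = PatchEncoder()                              # updated only via EMA, no gradients
target_enc.load_state_dict(context_enc.state_dict())
for p in target_enc.parameters():
    p.requires_grad_(False)
predictor = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64))
opt = torch.optim.AdamW(list(context_enc.parameters()) + list(predictor.parameters()), lr=1e-3)

def jepa_step(series, mask_ratio=0.5, ema=0.996):
    """Predict latent targets of masked patches from the visible context; loss lives in latent space."""
    b, t = series.shape
    n_patches = t // context_enc.patch_len
    mask = torch.rand(b, n_patches) < mask_ratio         # True = masked (target) patch

    # Context branch sees the series with masked patches zeroed out.
    visible = series.clone().view(b, n_patches, -1)
    visible[mask] = 0.0
    ctx_repr = context_enc(visible.view(b, t))

    # Target branch encodes the full series; no gradients flow through it.
    with torch.no_grad():
        tgt_repr = target_enc(series)

    pred = predictor(ctx_repr)
    loss = F.smooth_l1_loss(pred[mask], tgt_repr[mask])  # compare representations, not raw values

    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                                # EMA update of the target encoder
        for pt, pc in zip(target_enc.parameters(), context_enc.parameters()):
            pt.mul_(ema).add_(pc, alpha=1 - ema)
    return loss.item()

loss = jepa_step(torch.randn(8, 128))                    # e.g. a batch of 8 series of length 128
```

In this sketch, reconstructing noisy raw values is avoided by design: the predictor only has to match the target encoder's representations, which is the core property the abstract attributes to latent-space self-supervision.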
