

Spotlight Poster

Are Language Models Actually Useful for Time Series Forecasting?

Mingtian Tan · Mike Merrill · Vinayak Gupta · Tim Althoff · Tom Hartvigsen

East Exhibit Hall A-C #3911
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Large language models (LLMs) are being applied to time series forecasting. But are language models actually useful for time series? In a series of ablation studies on three recent and popular LLM-based time series forecasting methods, we find that removing the LLM component or replacing it with a basic attention layer does not degrade forecasting performance; in most cases, the results even improve! We also find that despite their significant computational cost, pretrained LLMs do no better than models trained from scratch, do not represent the sequential dependencies in time series, and do not assist in few-shot settings. Additionally, we explore time series encoders and find that patching and attention structures perform similarly to LLM-based forecasters. All resources needed to reproduce our work are available: https://github.com/BennyTMT/LLMsForTimeSeries.
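To make the ablation concrete, here is a minimal PyTorch sketch, not the authors' implementation: the class name `PatchForecaster`, the hyperparameters (`patch_len`, `d_model`, etc.), and the Hugging-Face-style `inputs_embeds`/`last_hidden_state` backbone interface are all illustrative assumptions. It shows a patch-based forecaster whose backbone is either a supplied pretrained LLM or, when none is given, a single multi-head attention layer, the drop-in replacement the abstract describes:

```python
# Minimal sketch of the backbone-swap ablation (illustrative names, not the
# authors' code): a patch-based forecaster where the backbone is either a
# pretrained LLM or a single multi-head attention layer.
import torch
import torch.nn as nn

class PatchForecaster(nn.Module):
    def __init__(self, seq_len=96, pred_len=24, patch_len=16, d_model=128,
                 backbone=None):
        super().__init__()
        self.patch_len = patch_len
        self.n_patches = seq_len // patch_len
        # Embed each patch of raw values into a d_model-dimensional token.
        self.patch_embed = nn.Linear(patch_len, d_model)
        # Backbone: pass a (frozen) pretrained LLM here, or leave None to run
        # the "basic attention layer" ablation instead.
        self.use_attn = backbone is None
        self.backbone = backbone if backbone is not None else \
            nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(self.n_patches * d_model, pred_len)

    def forward(self, x):  # x: (batch, seq_len)
        # Split the series into non-overlapping patches: (B, n_patches, patch_len)
        patches = x.unfold(1, self.patch_len, self.patch_len)
        tokens = self.patch_embed(patches)  # (B, n_patches, d_model)
        if self.use_attn:
            tokens, _ = self.backbone(tokens, tokens, tokens)
        else:
            # Assumes a Hugging-Face-style model accepting precomputed embeddings.
            tokens = self.backbone(inputs_embeds=tokens).last_hidden_state
        return self.head(tokens.flatten(1))  # (B, pred_len)
```

Swapping backbones this way keeps the patch embedding and forecasting head identical, so any performance gap between the two variants is attributable to the backbone alone, which is the comparison the paper's ablations rest on.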
