Poster in Workshop: Time Series in the Age of Large Models
Align and Fine-Tune: Enhancing LLMs for Time-Series Forecasting
Ching Chang · Wei-Yao Wang · Wen-Chih Peng · Tien-Fu Chen · Sagar Samtani
Multivariate time-series forecasting is vital in fields such as economic planning and weather prediction, but deep models often require large datasets, limiting their practicality. Pre-trained Large Language Models (LLMs) have been adapted for time-series tasks, yet challenges persist due to the differences between time-series and linguistic data and the need for multi-scale temporal processing. To address these challenges, we introduce LLM4TS, a framework that leverages LLMs for time-series forecasting through a two-stage fine-tuning process: time-series alignment, which adapts LLMs to time-series data, and forecasting fine-tuning for downstream tasks. A novel two-level aggregation method integrates multi-scale temporal information within LLMs. Experiments show that LLM4TS outperforms state-of-the-art methods, excelling in both full-shot and few-shot scenarios. Comparisons with other unsupervised approaches further highlight LLM4TS's superior representation learning.
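The two-stage pipeline described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the pre-trained LLM backbone is replaced by a simple least-squares linear map so the example runs without deep-learning dependencies, and all function names (`make_patches`, `fit_linear`) are hypothetical.

```python
import numpy as np

def make_patches(series, patch_len):
    """Split a (time, channels) series into non-overlapping patches,
    each flattened to length patch_len * channels."""
    t, c = series.shape
    n = t // patch_len
    return series[: n * patch_len].reshape(n, patch_len * c)

def fit_linear(x, y):
    """Least-squares 'layer' standing in for gradient-based fine-tuning:
    returns W such that y ~= x @ W."""
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w

rng = np.random.default_rng(0)
series = rng.standard_normal((128, 3))        # 128 steps, 3 variables
patches = make_patches(series, patch_len=8)   # shape (16, 24)

# Stage 1 -- time-series alignment: adapt the backbone to time-series
# data with an autoregressive objective (predict the next patch).
backbone = fit_linear(patches[:-1], patches[1:])
aligned = patches @ backbone                  # aligned representations

# Stage 2 -- forecasting fine-tuning: train a task head on top of the
# aligned representations for the downstream forecasting objective.
head = fit_linear(aligned[:-1], patches[1:])
forecast = aligned[-1] @ head                 # prediction for the next patch
```

In the actual framework the backbone is a pre-trained LLM and both stages update it with gradient-based fine-tuning; the split into an alignment stage followed by a task-specific stage is the structural point being illustrated.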