Spotlight in Workshop: Time Series in the Age of Large Models
TimeSeriesExam: A Time Series Understanding Exam
Yifu Cai · Arjun Choudhry · Mononito Goswami · Artur Dubrawski
Abstract:
Large Language Models (LLMs) have recently demonstrated a remarkable ability to model time series data. These capabilities can be partly explained if LLMs understand basic time series concepts. However, our knowledge of what these models understand about time series data remains relatively limited. To address this gap, we introduce $\texttt{TimeSeriesExam}$, a configurable and scalable multiple-choice question exam designed to assess LLMs across five core time series understanding categories: $\textit{pattern recognition}$, $\textit{noise understanding}$, $\textit{similarity analysis}$, $\textit{anomaly detection}$, and $\textit{causality analysis}$. $\texttt{TimeSeriesExam}$ comprises over 700 questions, procedurally generated using 104 carefully curated templates and iteratively refined to balance difficulty and the ability to discriminate good from bad models. We test 7 state-of-the-art LLMs on the $\texttt{TimeSeriesExam}$ and provide the first comprehensive evaluation of their time series understanding abilities. Our results suggest that closed-source models such as $\texttt{GPT-4}$ and $\texttt{Gemini}$ understand simple time series concepts significantly better than their open-source counterparts, while all models struggle with complex concepts such as causality analysis. We believe that the ability to programmatically generate questions is fundamental to assessing and improving LLMs' ability to understand and reason about time series data.
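To make the idea of procedural question generation concrete, here is a minimal sketch of how a single pattern-recognition multiple-choice question might be instantiated from a template. This is an illustrative assumption, not the authors' actual template code; the function name and option wording are hypothetical.

```python
import numpy as np

# Illustrative sketch (assumed, not from the paper): one template that asks
# about the overall trend of a procedurally generated synthetic series.
rng = np.random.default_rng(0)

def generate_trend_question(length: int = 100) -> dict:
    """Generate one multiple-choice question about the trend of a synthetic series."""
    slope = float(rng.choice([-0.5, 0.0, 0.5]))    # hidden ground-truth trend
    noise = rng.normal(scale=1.0, size=length)      # additive noise
    series = slope * np.arange(length) + noise

    options = ["increasing", "decreasing", "no clear trend"]
    answer = {0.5: "increasing", -0.5: "decreasing", 0.0: "no clear trend"}[slope]
    return {
        "series": series.tolist(),
        "question": "What best describes the overall trend of the series?",
        "options": options,
        "answer": answer,
    }

# Example usage: generate one question and inspect it.
q = generate_trend_question()
print(q["question"], q["options"], q["answer"])
```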