

Oral in Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning

Are LLMs Prescient? A Continuous Evaluation using Daily News as the Oracle

Amelia Hui Dai · Ryan S Teehan · Mengye Ren


Abstract:

Existing evaluation benchmarks for Large Language Models (LLMs) quickly become outdated due to model updates and an evolving information landscape. Moreover, they often lack the ability to assess how model performance evolves over time, as they consist of static questions without a temporal dimension. To address these limitations, we propose using future event prediction as a continuous evaluation method to assess LLMs' temporal generalization and forecasting abilities. Our benchmark, Daily Oracle, automatically generates question-answer (QA) pairs from daily news, challenging LLMs to predict "future" events based on their pre-training data. Our findings reveal that as pre-training data becomes outdated, LLM performance degrades over time. While Retrieval Augmented Generation (RAG) can enhance prediction accuracy, the degradation persists, highlighting the need for ongoing model updates.
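To make the continuous-evaluation idea concrete, below is a minimal sketch, not the authors' implementation, of how daily news-derived QA pairs could be used to track accuracy over time for a model with a fixed pre-training cutoff. The functions `query_model` and `generate_qa_pairs_from_news` are hypothetical placeholders standing in for an LLM call and the Daily Oracle QA generation step.

```python
# Minimal sketch (hypothetical, not the paper's code) of a continuous-evaluation loop:
# for each day, pose news-derived QA pairs to a model whose pre-training data ends
# before that day, and record accuracy so temporal degradation can be plotted.

from datetime import date, timedelta
from typing import Callable

def evaluate_over_time(
    query_model: Callable[[str], str],  # hypothetical LLM call: question -> predicted answer
    generate_qa_pairs_from_news: Callable[[date], list[tuple[str, str]]],  # day -> (question, gold answer) pairs
    start: date,
    end: date,
) -> dict[date, float]:
    """Return per-day accuracy for questions whose answers lie after the model's cutoff."""
    accuracy_by_day: dict[date, float] = {}
    day = start
    while day <= end:
        qa_pairs = generate_qa_pairs_from_news(day)
        if qa_pairs:
            correct = sum(
                query_model(question).strip().lower() == gold.strip().lower()
                for question, gold in qa_pairs
            )
            accuracy_by_day[day] = correct / len(qa_pairs)
        day += timedelta(days=1)
    return accuracy_by_day
```

Plotting the returned per-day accuracies against the model's pre-training cutoff is one way to visualize the degradation trend the abstract describes; a RAG variant would simply prepend retrieved articles to each question before calling the model.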
