Poster in Workshop: Statistical Frontiers in LLMs and Foundation Models

Towards the Effect of Examples on In-Context Learning: A Theoretical Case Study

Pengfei He · Yingqian Cui · Han Xu · Hui Liu · Makoto Yamada · Jiliang Tang · Yue XING

Keywords: [ In-context learning ] [ classification ]

[ Project Page ]
Sat 14 Dec, 12:00–12:45 p.m. PST

Abstract:

In-context learning (ICL) has emerged as a powerful capability of large language models (LLMs), enabling them to adapt to new tasks from a few (demonstration) examples. Despite its effectiveness, the mechanism behind ICL remains underexplored. This paper uses a Bayesian framework to investigate how ICL integrates pre-training knowledge and examples for binary classification. In particular, we introduce a probabilistic model that extends the Gaussian mixture model to exactly quantify the impact of pre-training knowledge, label frequency, and label noise on prediction accuracy. Based on our analysis, when the pre-training knowledge contradicts the knowledge in the examples, whether the ICL prediction relies more on the pre-training knowledge or on the examples depends on the number of examples. In addition, both the label frequency and the label noise of the examples affect the accuracy of the ICL prediction: the minority class attains lower accuracy, and the impact of label errors is determined by the error rates of the two classes. Extensive simulations verify the correctness of the theoretical results, and real-data experiments also align with the theoretical insights. Our work reveals the dual role of pre-training knowledge and examples in ICL, offering a deeper understanding of LLMs' behaviors in classification tasks.
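The following is a minimal, illustrative sketch (not the paper's exact model) of the kind of Bayesian picture the abstract describes: each class has Gaussian features, "pre-training knowledge" is a prior on the class means that can contradict the demonstration distribution, and the demonstrations can be class-imbalanced and label-noisy. All parameter values and function names are assumptions chosen for illustration; the qualitative trend shown (with few examples the contradictory prior dominates, with many examples the demonstrations take over) mirrors the abstract's claim about the role of the number of examples.

```python
# Toy simulation of ICL as Bayesian prediction under a Gaussian mixture:
# a prior on class means encodes (possibly contradictory) pre-training
# knowledge, while demonstrations may be imbalanced and label-noisy.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

true_means = {0: -1.0, 1: 1.0}    # task-specific class means
sigma = 1.0                        # feature noise std
tau = 1.0                          # prior std on each class mean
prior_means = {0: 1.0, 1: -1.0}    # pre-training knowledge that CONTRADICTS the task

def sample_demos(n, p_minor=0.2, flip_rates=(0.1, 0.1)):
    """Draw n demonstration pairs with class imbalance and label noise."""
    y = rng.binomial(1, 1 - p_minor, size=n)            # class 0 is the minority class
    x = np.array([rng.normal(true_means[yi], sigma) for yi in y])
    flip = rng.random(n) < np.where(y == 0, flip_rates[0], flip_rates[1])
    y_obs = np.where(flip, 1 - y, y)                     # observed (possibly wrong) labels
    return x, y_obs

def posterior_predictive_logpdf(x_query, x_demo, y_demo, label):
    """Conjugate normal-normal update for the mean of class `label`."""
    xs = x_demo[y_demo == label]
    n = len(xs)
    if n == 0:
        post_mean, post_var = prior_means[label], tau**2
    else:
        post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)
        post_mean = post_var * (prior_means[label] / tau**2 + xs.sum() / sigma**2)
    # Posterior predictive is Gaussian with variance inflated by posterior uncertainty.
    return norm.logpdf(x_query, loc=post_mean, scale=np.sqrt(sigma**2 + post_var))

def icl_predict(x_query, x_demo, y_demo):
    """Pick the label with the higher frequency-weighted predictive density."""
    n = len(y_demo)
    log_prior = {c: np.log((np.sum(y_demo == c) + 1) / (n + 2)) for c in (0, 1)}
    scores = {c: log_prior[c] + posterior_predictive_logpdf(x_query, x_demo, y_demo, c)
              for c in (0, 1)}
    return max(scores, key=scores.get)

# Accuracy vs. number of demonstrations: few examples -> the contradictory
# prior dominates; many examples -> the demonstrations dominate.
for n in (2, 8, 32, 128):
    correct, trials = 0, 500
    for _ in range(trials):
        x_demo, y_demo = sample_demos(n)
        y_true = rng.integers(0, 2)
        x_query = rng.normal(true_means[y_true], sigma)
        correct += icl_predict(x_query, x_demo, y_demo) == y_true
    print(f"n={n:4d}  accuracy={correct / trials:.3f}")
```

In this sketch, varying `p_minor` and `flip_rates` also reproduces the other two qualitative effects mentioned in the abstract: the minority class is classified less accurately, and the effect of label noise depends on the per-class error rates.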
