

Spotlight Talk in Workshop: Foundation Model Interventions

Probing the Decision Boundaries of In-context Learning in Large Language Models

Siyan Zhao

Sun 15 Dec 10:09 a.m. PST — 10:15 a.m. PST

Abstract:

In-context learning in large language models enables them to generalize to new tasks when prompted with a few exemplars, without explicit parameter updates. In this work, we propose a new mechanism to probe and understand in-context learning through the lens of decision boundaries for in-context classification. Decision boundaries qualitatively reveal the inductive biases of standard classifiers. Surprisingly, we find that the decision boundaries learned by current LLMs on simple binary classification tasks are irregular and non-smooth. We investigate factors influencing these boundaries and explore methods to enhance their generalizability. Our findings offer insights into in-context learning dynamics and practical improvements for enhancing its robustness and generalizability.
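As a rough illustration of the probing setup the abstract describes, the sketch below classifies a grid of 2D query points by prompting a model with a handful of labeled exemplars and reading back a label for each point; the resulting grid of predictions traces the in-context decision boundary. The prompt format and the `query_llm` placeholder are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: probing an LLM's in-context decision boundary on a
# 2D binary classification task. `query_llm` is a placeholder for an
# actual chat/completion API call; swap in the model of interest.
import numpy as np
from sklearn.datasets import make_moons

def build_prompt(exemplars, query):
    """Format in-context exemplars followed by a single query point."""
    lines = ["Classify each point as 0 or 1."]
    for (x1, x2), label in exemplars:
        lines.append(f"Point: ({x1:.2f}, {x2:.2f}) -> Label: {label}")
    lines.append(f"Point: ({query[0]:.2f}, {query[1]:.2f}) -> Label:")
    return "\n".join(lines)

def query_llm(prompt):
    # Placeholder: replace with a real LLM call and parse "0" or "1"
    # from the completion. A dummy label is returned so the sketch runs.
    return 0

# A few labeled exemplars define the in-context classification task.
X, y = make_moons(n_samples=32, noise=0.1, random_state=0)
exemplars = list(zip(map(tuple, X), y))

# Query the model over a grid of points to trace its decision boundary.
xs = np.linspace(X[:, 0].min() - 0.5, X[:, 0].max() + 0.5, 50)
ys = np.linspace(X[:, 1].min() - 0.5, X[:, 1].max() + 0.5, 50)
grid_preds = np.array([[query_llm(build_prompt(exemplars, (gx, gy)))
                        for gx in xs] for gy in ys])
# grid_preds can then be plotted (e.g., with plt.contourf) to visualize
# how smooth or irregular the in-context decision boundary is.
```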
