Poster Session in Workshop: Scientific Methods for Understanding Neural Networks
Probing the Decision Boundaries of In-context Learning in Large Language Models
Siyan Zhao · Tung Nguyen · Aditya Grover
In-context learning enables large language models to generalize to new tasks from a few prompted exemplars, without explicit parameter updates. In this work, we propose a new mechanism to probe and understand in-context learning through the lens of decision boundaries for in-context classification. Decision boundaries qualitatively reveal the inductive biases of standard classifiers. Surprisingly, we find that the decision boundaries learned by current LLMs on simple binary classification tasks are irregular and non-smooth. We investigate the factors influencing these boundaries and explore methods to make them smoother and more generalizable. Our findings offer insights into the dynamics of in-context learning and suggest practical improvements to its robustness and generalizability.
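The probing idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not specify the probing procedure, so the sketch assumes a 2D binary task, a grid of query points, and a hypothetical `llm_predict` function standing in for the LLM's in-context prediction (here a simple nearest-exemplar rule, so the snippet is runnable without model access).

```python
import numpy as np

def llm_predict(exemplars, labels, query):
    """Hypothetical stand-in for an LLM queried with in-context exemplars.
    A 1-nearest-exemplar rule plays the role of the model here."""
    dists = np.linalg.norm(exemplars - query, axis=1)
    return labels[int(np.argmin(dists))]

def probe_decision_boundary(exemplars, labels, resolution=20):
    """Query the classifier on a dense 2D grid and return the label map;
    the boundary is wherever adjacent grid cells disagree."""
    xs = np.linspace(-1.0, 1.0, resolution)
    ys = np.linspace(-1.0, 1.0, resolution)
    preds = [llm_predict(exemplars, labels, np.array([x, y]))
             for y in ys for x in xs]
    return np.array(preds).reshape(resolution, resolution)

# A few in-context exemplars for a toy linearly separable task.
exemplars = np.array([[-0.5, -0.5], [-0.6, 0.2], [0.5, 0.5], [0.6, -0.1]])
labels = np.array([0, 0, 1, 1])

label_map = probe_decision_boundary(exemplars, labels, resolution=20)

# One way to quantify boundary irregularity: count label flips between
# adjacent grid cells (a smooth boundary yields relatively few flips).
flips = (np.count_nonzero(np.diff(label_map, axis=0)) +
         np.count_nonzero(np.diff(label_map, axis=1)))
print(label_map.shape, flips)
```

In the paper's setting, `llm_predict` would instead format the exemplars and the query as a few-shot prompt and parse the model's completion into a class label; the grid-probing and flip-counting logic would be unchanged.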