Poster in Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning

Efficiently Learning at Test-Time: Active Fine-Tuning of LLMs

Jonas Hübotter · Sascha Bongni · Ido Hakimi · Andreas Krause


Abstract:

Recent efforts in fine-tuning language models often rely on automatic data selection, commonly using Nearest Neighbor retrieval from large datasets. However, we theoretically show that this approach tends to select redundant data, limiting its effectiveness or even hurting performance. To address this, we introduce SIFT, a data selection algorithm that unifies ideas from retrieval and active learning and is designed to reduce uncertainty about the model's response to a given prompt. Unlike Nearest Neighbor retrieval, SIFT accounts for redundant information and optimizes the overall information gain of the selected examples. Our evaluations, focusing on prompt-specific fine-tuning at test-time, show that SIFT consistently outperforms Nearest Neighbor retrieval in language modeling on the Pile dataset, with minimal computational overhead. Whereas Nearest Neighbor retrieval typically fails in the presence of information duplication, SIFT is robust to such cases.
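
A minimal sketch of the uncertainty-reduction idea described above, assuming a Gaussian surrogate with a dot-product kernel over embeddings: each candidate is scored by how much it shrinks the posterior variance at the prompt embedding, given the points already selected. The function names, kernel choice, and noise parameter here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def posterior_variance(prompt_emb, selected_embs, noise=1e-2):
    """Posterior variance at the prompt under a Gaussian surrogate
    with a dot-product kernel over embeddings (illustrative choice)."""
    if len(selected_embs) == 0:
        return float(prompt_emb @ prompt_emb)
    S = np.asarray(selected_embs)                 # (n, d) selected embeddings
    K = S @ S.T + noise * np.eye(len(S))          # kernel matrix of selected data
    k = S @ prompt_emb                            # covariances with the prompt
    return float(prompt_emb @ prompt_emb - k @ np.linalg.solve(K, k))

def sift_style_select(prompt_emb, candidate_embs, k=10, noise=1e-2):
    """Greedily pick candidates that most reduce uncertainty about the
    prompt; redundancy is penalized automatically, since information
    already covered by earlier picks no longer lowers the variance."""
    selected, chosen = [], []
    for _ in range(k):
        best_i, best_var = None, np.inf
        for i, x in enumerate(candidate_embs):
            if i in chosen:
                continue
            var = posterior_variance(prompt_emb, selected + [x], noise)
            if var < best_var:
                best_i, best_var = i, var
        chosen.append(best_i)
        selected.append(candidate_embs[best_i])
    return chosen
```

This contrast makes the abstract's claim concrete: Nearest Neighbor retrieval scores each candidate independently by similarity to the prompt, so near-duplicates all rank highly and are all retrieved; in the sketch above, once a point is selected, the variance it explains is already accounted for, so its duplicates yield almost no further reduction and are skipped.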
