Poster in Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning

metaTextGrad: Learning to learn with language models as optimizers

Guowei Xu · Mert Yuksekgonul · Carlos Guestrin · James Zou


Abstract:

Large language models (LLMs) are increasingly used in learning algorithms, evaluations, and optimization tasks. Recent studies have shown that incorporating self-criticism into LLMs can significantly enhance model performance, with frameworks such as TextGrad illustrating this approach by iteratively refining model outputs through prompting. However, these frameworks often require extensive hand-crafting and are sensitive to instruction wording. To mitigate these challenges, we propose metaTextGrad, a meta-learning approach for LLM-based optimizers, focusing on learning loss functions and templates for inference-time optimization. Our method significantly improves performance across multiple benchmarks, achieving 5-27% gains on question-answering tasks. These results demonstrate the potential of meta-learning to enhance LLM-based systems, reducing manual tuning and improving generalizability.
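The abstract describes a two-level structure: an inner, TextGrad-style loop that refines an answer via LLM self-criticism, and an outer meta-loop that learns the critic prompt (the "loss function" template) itself. The sketch below illustrates that structure only; all names (`llm`, `inner_optimize`, `meta_train`) and the stubbed LLM are hypothetical placeholders, not the paper's actual implementation.

```python
def llm(prompt):
    # Stand-in for a real LLM call; a deterministic stub for illustration.
    return f"answer({prompt[:20]})"

def inner_optimize(question, critic_template, steps=3):
    """Inner loop (TextGrad-style): iteratively refine an answer using
    a critic prompt built from the (meta-learned) template."""
    answer = llm(question)
    for _ in range(steps):
        critique = llm(critic_template.format(q=question, a=answer))
        answer = llm(f"Improve the answer given this critique: {critique}")
    return answer

def meta_train(tasks, candidate_templates, score):
    """Outer meta-loop: select the critic template that maximizes the
    average downstream score -- i.e., learn the loss function/template."""
    best, best_score = None, float("-inf")
    for tmpl in candidate_templates:
        avg = sum(score(inner_optimize(q, tmpl), gold)
                  for q, gold in tasks) / len(tasks)
        if avg > best_score:
            best, best_score = tmpl, avg
    return best
```

In practice the outer loop would itself use an LLM to propose and revise templates rather than search a fixed candidate set; the fixed set here just keeps the sketch self-contained.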
