Poster in Workshop: Foundation Models for Decision Making
Using Large Language Models for Hyperparameter Optimization
Michael Zhang · Nishkrit Desai · Juhan Bae · Jonathan Lorraine · Jimmy Ba
This paper studies the use of foundational large language models (LLMs) to make decisions during hyperparameter optimization (HPO). Our primary objective is to understand the performance and limitations of LLMs for HPO. We study the behavior of LLMs when optimizing simple 2-dimensional landscapes to evaluate the LLM search algorithm and its chain-of-thought reasoning. We then consider classical setups, with pre-specified search spaces over a small number of hyperparameters, showing an improved initial search phase compared to existing methods. Furthermore, we propose treating the code specifying our model as a hyperparameter, which the LLM outputs directly, going beyond the capabilities of existing HPO methods. Our contributions shed light on a new application of foundational LLMs to the traditional decision-making problem of hyperparameter optimization.
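A minimal sketch of the kind of loop the abstract describes: an LLM is repeatedly shown the trial history and asked to propose the next hyperparameter configuration. The function names (`query_llm`, `train_and_evaluate`), the prompt wording, and the JSON response format are illustrative assumptions, not the authors' actual setup; the placeholder LLM call returns random suggestions so the sketch runs end to end.

```python
import json
import random

def query_llm(prompt: str) -> str:
    """Placeholder for a chat-model call (swap in your preferred LLM API).
    Returns a JSON string proposing the next configuration; here it is random."""
    lr = 10 ** random.uniform(-5, -1)
    wd = 10 ** random.uniform(-6, -2)
    return json.dumps({"learning_rate": lr, "weight_decay": wd})

def train_and_evaluate(config: dict) -> float:
    """Stand-in objective; replace with real training plus validation loss."""
    return (config["learning_rate"] - 1e-3) ** 2 + config["weight_decay"]

history = []  # (config, score) pairs shown to the LLM at each round
for step in range(10):
    prompt = (
        "You are helping with hyperparameter optimization.\n"
        "Previous trials (config -> validation loss):\n"
        + "\n".join(f"{json.dumps(c)} -> {s:.4f}" for c, s in history)
        + "\nPropose the next config as JSON with keys "
          "'learning_rate' and 'weight_decay'."
    )
    config = json.loads(query_llm(prompt))
    score = train_and_evaluate(config)
    history.append((config, score))

best_config, best_score = min(history, key=lambda t: t[1])
print("Best config:", best_config, "with loss", best_score)
```

Feeding the full trial history back into the prompt is what lets the model condition its next suggestion on past results, analogous to the surrogate-plus-acquisition step in classical HPO methods.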