

Poster

Practical Two-Step Lookahead Bayesian Optimization

Jian Wu · Peter Frazier

East Exhibition Hall B, C #33

Keywords: [ Probabilistic Methods ] [ Gaussian Processes ] [ Bayesian Theory ] [ Algorithms -> AutoML ] [ Optimization -> Non-Convex Optimization ]


Abstract:

Expected improvement and other acquisition functions widely used in Bayesian optimization use a "one-step" assumption: they value objective function evaluations assuming no future evaluations will be performed. Because we usually evaluate over multiple steps, this assumption may leave substantial room for improvement. Existing theory gives acquisition functions looking multiple steps in the future but calculating them requires solving a high-dimensional continuous-state continuous-action Markov decision process (MDP). Fast exact solutions of this MDP remain out of reach of today's methods. As a result, previous two- and multi-step lookahead Bayesian optimization algorithms are either too expensive to implement in most practical settings or resort to heuristics that may fail to fully realize the promise of two-step lookahead. This paper proposes a computationally efficient algorithm that provides an accurate solution to the two-step lookahead Bayesian optimization problem in seconds to at most several minutes of computation per batch of evaluations. The resulting acquisition function provides increased query efficiency and robustness compared with previous two- and multi-step lookahead methods in both single-threaded and batch experiments. This unlocks the value of two-step lookahead in practice. We demonstrate the value of our algorithm with extensive experiments on synthetic test functions and real-world problems.
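To make the "one-step" assumption concrete, here is a minimal sketch contrasting standard expected improvement with a two-step lookahead value. This is an illustration of the general idea, not the paper's algorithm: the closed-form EI is standard, while `two_step_value` and its `next_step_ei` callback (which would in practice require an inner optimization over the fantasy-updated posterior) are hypothetical placeholders, with fantasy outcomes taken at evenly spaced posterior quantiles as a deterministic stand-in for Monte Carlo sampling.

```python
import math
from statistics import NormalDist

STD_NORMAL = NormalDist()  # standard normal: provides pdf, cdf, inv_cdf

def expected_improvement(mu, sigma, best):
    """One-step EI for minimization: E[max(best - Y, 0)] with Y ~ N(mu, sigma^2)."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    return (best - mu) * STD_NORMAL.cdf(z) + sigma * STD_NORMAL.pdf(z)

def two_step_value(mu, sigma, best, next_step_ei, n_fantasies=32):
    """Two-step lookahead value of a candidate point (illustrative sketch).

    Immediate EI plus the expected best one-step EI obtainable after
    observing a fantasy outcome y. `next_step_ei` is a hypothetical
    callback mapping the updated incumbent to the best EI of the next
    evaluation; a real implementation would optimize EI over the
    GP posterior conditioned on (x, y).
    """
    value = expected_improvement(mu, sigma, best)
    future = 0.0
    for i in range(n_fantasies):
        q = (i + 0.5) / n_fantasies
        y = mu + sigma * STD_NORMAL.inv_cdf(q)  # fantasy observation
        future += next_step_ei(min(best, y))    # incumbent after observing y
    return value + future / n_fantasies
```

Because the second-stage term is nonnegative, the two-step value always upper-bounds one-step EI at the same point; the two acquisition functions differ in where they place their maximum, which is what drives the query-efficiency gains described above.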
