Poster
Stopping Bayesian Optimization with Probabilistic Regret Bounds
James Wilson
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST
Abstract:
Bayesian optimization is a popular framework for efficiently finding high-quality solutions to difficult problems based on limited information. As a rule, these algorithms operate by iteratively choosing what to try next until some predefined budget has been exhausted. We investigate replacing this de facto stopping rule with criteria based on the probability that a point satisfies a given set of conditions. As a prototypical example, we focus on an $(\epsilon, \delta)$-criterion: stop when a solution has been found whose value is within $\epsilon > 0$ of the optimum with probability at least $1 - \delta$ under the model. For Gaussian process priors, we prove that Bayesian optimization satisfies this criterion under mild technical assumptions. Further, we give a practical algorithm for evaluating stopping rules with Monte Carlo estimators in a manner that is robust to estimation error. These findings are accompanied by empirical results which demonstrate the strengths and weaknesses of this approach.
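To make the $(\epsilon, \delta)$-criterion concrete, the sketch below illustrates one way a Monte Carlo check of this kind could look under a Gaussian process surrogate: draw joint posterior samples over a candidate grid and stop once the estimated probability that the incumbent is within $\epsilon$ of the sampled maximum reaches $1 - \delta$. This is a minimal illustration, not the authors' algorithm; in particular it does not include the paper's safeguards against Monte Carlo estimation error, and the function and variable names (`should_stop`, `X_cand`, `x_best`) are hypothetical.

```python
# Minimal sketch (not the paper's implementation) of an (epsilon, delta)-style
# stopping check via Monte Carlo samples from a GP posterior.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def should_stop(gp, X_cand, x_best, epsilon=0.1, delta=0.05, n_samples=2048, seed=0):
    """Estimate P(f(x_best) >= max_x f(x) - epsilon) over the candidate grid
    and report whether it reaches 1 - delta. Illustrative only."""
    X_all = np.vstack([X_cand, x_best[None, :]])
    # Joint posterior draws of f at all points: shape (n_points, n_samples)
    samples = gp.sample_y(X_all, n_samples=n_samples, random_state=seed)
    f_best = samples[-1, :]        # draws of f at the incumbent solution
    f_max = samples.max(axis=0)    # draws of the grid-wise maximum
    prob = np.mean(f_best >= f_max - epsilon)
    return prob >= 1.0 - delta, prob


# Toy usage on a 1-D maximization problem
rng = np.random.default_rng(0)
X_obs = rng.uniform(0, 1, size=(8, 1))
y_obs = np.sin(6 * X_obs).ravel() + 0.05 * rng.standard_normal(8)
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4).fit(X_obs, y_obs)

X_cand = np.linspace(0, 1, 200)[:, None]
x_best = X_obs[np.argmax(y_obs)]
stop, prob = should_stop(gp, X_cand, x_best, epsilon=0.1, delta=0.05)
print(f"Estimated P(within epsilon of optimum) = {prob:.3f}; stop = {stop}")
```

A plug-in Monte Carlo estimate like this can be fooled by sampling noise near the $1 - \delta$ threshold, which is the estimation-error issue the abstract says the proposed algorithm is designed to handle robustly.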