

Poster

Non-Asymptotic Pure Exploration by Solving Games

Rémy Degenne · Wouter Koolen · Pierre Ménard

East Exhibition Hall B, C #17

Keywords: [ Bandit Algorithms ] [ Algorithms ] [ Exploration ] [ Reinforcement Learning and Planning ]


Abstract:

Pure exploration (also known as active testing) is the fundamental task of sequentially gathering information to answer a query about a stochastic environment. Good algorithms make few mistakes and take few samples. Lower bounds (for multi-armed bandit models with arms in an exponential family) reveal that the sample complexity is determined by the solution to an optimisation problem. Existing state-of-the-art algorithms achieve asymptotic optimality by solving a plug-in estimate of that optimisation problem at each step. We interpret the optimisation problem as an unknown game, and propose sampling rules based on iterative strategies that estimate and converge to its saddle point. We apply no-regret learners to obtain the first finite-confidence guarantees that are adapted to the exponential family and that apply to any pure exploration query and bandit structure. Moreover, our algorithms only use a best-response oracle instead of fully solving the optimisation problem.
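To make the game-solving idea concrete, here is a minimal sketch (not the paper's exact sampling rule) of the max-min loop it describes: a no-regret learner (Hedge) plays arm allocations against a best-response oracle that returns the closest alternative bandit instance. The sketch assumes Gaussian arms with unit variance and, unlike the paper's algorithms, takes the true means as known rather than using plug-in estimates with optimism; the function names, learning rate, and iteration count are illustrative choices.

```python
import numpy as np

def best_response(w, mu):
    """Min player's best response: the closest alternative instance.

    For unit-variance Gaussian arms, the nearest bandit instance in which
    some arm b != a* is best moves the means of a* and b to their
    w-weighted average. Returns that instance and its weighted divergence
    sum_a w_a * (mu_a - lam_a)^2 / 2.
    """
    star = np.argmax(mu)
    best_val, best_lam = np.inf, None
    for b in range(len(mu)):
        if b == star:
            continue
        lam = mu.copy()
        # Weighted midpoint at which arms `star` and `b` become tied.
        x = (w[star] * mu[star] + w[b] * mu[b]) / (w[star] + w[b] + 1e-12)
        lam[star], lam[b] = x, x
        val = 0.5 * (w[star] * (mu[star] - x) ** 2 + w[b] * (mu[b] - x) ** 2)
        if val < best_val:
            best_val, best_lam = val, lam
    return best_lam, best_val

def saddle_point(mu, iterations=2000, eta=0.5):
    """Approximate the max-min allocation: Hedge vs. best response."""
    K = len(mu)
    w = np.full(K, 1.0 / K)
    w_sum = np.zeros(K)
    for _ in range(iterations):
        lam, _ = best_response(w, mu)
        # Per-arm payoff: information each arm contributes against lam.
        # This is a supergradient of the concave value function at w.
        grad = 0.5 * (mu - lam) ** 2
        w = w * np.exp(eta * grad)   # multiplicative-weights update
        w /= w.sum()
        w_sum += w
    # The average iterate of a no-regret player against best responses
    # converges to a maximin (oracle) allocation.
    return w_sum / iterations

mu = np.array([1.0, 0.8, 0.5, 0.3])
print("approximate oracle allocation:", np.round(saddle_point(mu), 3))
```

Note that the loop never solves the inner optimisation exactly: each iteration only queries the best-response oracle once and makes one cheap multiplicative-weights update, which is the computational advantage the abstract highlights over solving a plug-in estimate of the full problem at every step.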
