Spotlight
A Bandit Learning Algorithm and Applications to Auction Design
Kim Thang Nguyen
Orals & Spotlights: Learning Theory
Abstract:
We consider online bandit learning in which at every time step,
an algorithm has to make a decision and then observe only its reward.
The goal is to design efficient (polynomial-time) algorithms whose total reward is
close to that of the best fixed decision in hindsight.
In this paper, we introduce a new notion of $(\lambda,\mu)$-concave functions and present a bandit learning algorithm
whose performance guarantee is characterized by the concavity parameters $\lambda$ and $\mu$.
The algorithm is based on mirror descent, with update directions that follow the gradients of the multilinear extensions of
the reward functions.
The regret bound of our algorithm is $\widetilde{O}(\sqrt{T})$, which is nearly optimal.
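The abstract contains no pseudocode, but the high-level scheme it describes can be made concrete. The following minimal Python sketch is purely illustrative and not the paper's actual algorithm: the toy reward function, step size `eta`, and clipping level `eps` are all assumptions. It runs entropic mirror descent over a fractional point in $[0,1]^n$, where each update direction is an unbiased one-point estimate of the gradient of the reward's multilinear extension, built from bandit feedback alone:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, eta, eps = 5, 5000, 0.05, 0.05

# Toy monotone submodular reward (hypothetical, for illustration only):
# probabilistic coverage with per-item weights.
w = rng.random(n)
def reward(s):            # s is a 0/1 numpy vector encoding a set
    return 1.0 - np.prod(1.0 - w * s)

x = np.full(n, 0.5)       # fractional point in [0,1]^n, kept in [eps, 1-eps]
for t in range(T):
    s = (rng.random(n) < x).astype(float)   # play a random set S ~ product(x)
    r = reward(s)                           # bandit feedback: only f(S)
    # Unbiased one-point gradient estimate of the multilinear extension F:
    # E[f(S) 1{i in S}]/x_i = F(x with x_i=1) and
    # E[f(S) 1{i not in S}]/(1-x_i) = F(x with x_i=0); their difference
    # equals dF/dx_i because F is multilinear in x.
    g = r * (s / x - (1.0 - s) / (1.0 - x))
    # Entropic mirror descent step on each coordinate's two-point simplex
    # {x_i, 1-x_i}; clipping keeps the gradient estimate's variance bounded.
    num = x * np.exp(eta * g)
    x = np.clip(num / (num + 1.0 - x), eps, 1.0 - eps)

print("final x:", np.round(x, 2), " item weights:", np.round(w, 2))
```

Sampling each item independently with probability $x_i$ makes the expected reward of the played set equal the multilinear extension $F(x)$, which is why a single observed reward per round suffices for an unbiased gradient estimate.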
We apply our algorithm to auction design, specifically to welfare maximization,
revenue maximization, and no-envy learning in auctions.
In welfare maximization, we show that a version of fictitious play
in smooth auctions guarantees a competitive regret bound determined by the smoothness parameters.
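For readers unfamiliar with fictitious play, here is a minimal, purely illustrative sketch in a toy zero-sum matrix game; the paper's version operates under bandit feedback in smooth auctions, which this sketch does not model:

```python
import numpy as np

# Classical fictitious play in matching pennies: each player best-responds
# to the opponent's empirical mixed strategy so far.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs; column player receives -A
counts = [np.ones(2), np.ones(2)]          # empirical action counts of each player

for t in range(10000):
    p_row = counts[0] / counts[0].sum()
    p_col = counts[1] / counts[1].sum()
    a_row = int(np.argmax(A @ p_col))      # best response to the opponent's history
    a_col = int(np.argmax((-A.T) @ p_row))
    counts[0][a_row] += 1
    counts[1][a_col] += 1

# In this zero-sum game, empirical play converges to the (0.5, 0.5) equilibrium.
print(counts[0] / counts[0].sum(), counts[1] / counts[1].sum())
```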
In revenue maximization, we consider simultaneous second-price auctions with reserve prices
in multi-parameter environments. We give a bandit algorithm whose total revenue is at least $1/2$ times
that of the best fixed reserve prices in hindsight.
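As background for this application, the per-item payment rule being optimized over reserve prices looks as follows (a hedged sketch; the helper name `spa_revenue` and the example numbers are hypothetical, not from the paper):

```python
def spa_revenue(bids, reserve):
    """Revenue of one second-price auction with a reserve price: the item
    sells only if the highest bid meets the reserve, and the winner then
    pays the larger of the reserve and the second-highest bid."""
    b = sorted(bids, reverse=True)
    if not b or b[0] < reserve:
        return 0.0                      # no sale: top bid is below the reserve
    second = b[1] if len(b) > 1 else 0.0
    return max(reserve, second)

# Example: with bids (7, 4, 3) and reserve 5, the winner pays max(5, 4) = 5.
assert spa_revenue([7, 4, 3], 5) == 5
# In simultaneous auctions, the seller's revenue is the sum of this quantity
# over all items, with one reserve price tuned per item.
```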
In no-envy learning, we study the bandit item selection problem, where the player's valuation is submodular,
and provide an efficient $1/2$-approximation no-envy algorithm.
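For context, submodularity means diminishing marginal returns; a minimal coverage-function example (toy data, not from the paper) makes this concrete:

```python
# A coverage valuation: the value of a set of items is the number of distinct
# elements those items jointly cover. Coverage functions are a textbook
# example of submodular (diminishing-returns) valuations.
COVER = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c"}}

def valuation(items):
    covered = set()
    for i in items:
        covered |= COVER[i]
    return len(covered)

# Diminishing returns: item 2's marginal value can only shrink as the set grows.
assert valuation({0, 2}) - valuation({0}) >= valuation({0, 1, 2}) - valuation({0, 1})
```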