Poster
Efficient online algorithms for fast-rate regret bounds under sparsity
Pierre Gaillard · Olivier Wintenberger
Room 517 AB #141
Keywords: [ Learning Theory ] [ Online Learning ] [ Convex Optimization ] [ Sparsity and Compressed Sensing ]
Abstract:
We consider the problem of online convex optimization in two different settings: arbitrary and i.i.d. sequences of convex loss functions. In both settings, we provide efficient algorithms whose cumulative excess risks are controlled by fast-rate sparse bounds.
First, the excess risk bounds depend on the sparsity of the objective rather than on the dimension of the parameter space. Second, under additional convexity assumptions on the loss functions, their rates are faster than the slow rate $1/\sqrt{T}$. In the adversarial setting, we develop an algorithm, BOA+, whose cumulative excess risk is controlled by several bounds with different trade-offs between sparsity and rate for strongly convex loss functions. In the i.i.d. setting, under the Łojasiewicz assumption, we establish new risk bounds that are sparse with a rate adaptive to the convexity of the risk (ranging from $1/\sqrt{T}$ for general convex risks to $1/T$ for strongly convex risks). These results generalize previous work on sparse online learning under weak assumptions on the risk.
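For readers unfamiliar with aggregation-based online learners, the sketch below shows the classical exponentially weighted average forecaster, the standard first-order baseline that second-order aggregation methods such as BOA (and the paper's BOA+) refine. This is a minimal illustration, not the authors' algorithm: the learning rate eta, the simulated expert losses, and all variable names are assumptions made for the example.

import numpy as np

def exponentially_weighted_average(losses, eta=0.5):
    """Classical exponentially weighted average forecaster (Hedge).

    losses: (T, K) array of losses for K experts over T rounds.
    Returns the (T, K) sequence of weight vectors used at each round.
    Note: this is the standard baseline, NOT the BOA+ algorithm.
    """
    T, K = losses.shape
    weights = np.full(K, 1.0 / K)      # uniform prior over experts
    history = np.empty((T, K))
    for t in range(T):
        history[t] = weights
        # multiplicative update: downweight experts with large loss
        weights = weights * np.exp(-eta * losses[t])
        weights /= weights.sum()       # renormalize to a distribution
    return history

# Toy usage: two experts, the first consistently better on average.
rng = np.random.default_rng(0)
losses = rng.uniform(size=(100, 2)) + np.array([0.0, 0.3])
w = exponentially_weighted_average(losses)
print(w[-1])  # weights concentrate on the better (first) expert

Second-order variants such as BOA replace the plain loss in the exponent with a variance-corrected quantity, which is what enables the fast-rate guarantees the abstract refers to; the exact correction is specified in the paper.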