

Poster

Sample-Efficient Agnostic Boosting

Udaya Ghai · Karan Singh

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The theory of boosting provides a computational framework for aggregating approximate weak learning algorithms, whose performance is marginally better than that of a random predictor, into an accurate strong learner. In the realizable case, the success of the boosting approach is underscored by a remarkable fact: the resultant sample complexity matches that of a computationally demanding alternative, namely Empirical Risk Minimization (ERM). This in particular implies that the realizable boosting methodology has the potential to offer computational relief without compromising on sample efficiency.

Despite recent progress, in agnostic boosting, where joint distributional assumptions on the labels and feature descriptions are absent, ERM outstrips the agnostic boosting methodology: it is quadratically more sample efficient than all known agnostic boosting algorithms. In this paper, we make progress toward closing this gap, and give a substantially more sample-efficient agnostic boosting algorithm than those known, without compromising on the computational (or oracle) complexity. A key feature of our algorithm is that it leverages the ability to reuse samples across multiple rounds of boosting, while guaranteeing a generalization error strictly better than those obtained by blackbox applications of uniform convergence arguments. We also apply our approach to other previously studied learning problems, including boosting for reinforcement learning, and demonstrate improved results.
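To make the sample-reuse pattern concrete, below is a minimal, illustrative boosting skeleton in Python. It is not the authors' algorithm: the decision-stump weak learner, the AdaBoost-style exponential reweighting, and the parameters `rounds` and `eta` are all stand-in assumptions. The point it sketches is only that every round queries the weak-learning oracle on the same sample, rather than drawing a fresh batch per round.

```python
import numpy as np

def stump_weak_learner(X, y, w):
    # Stand-in weak-learning oracle: the decision stump maximizing
    # weighted correlation with the labels. Any oracle returning a
    # hypothesis slightly better than random plays the same role.
    best, best_corr = None, -np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                pred = s * np.where(X[:, j] > t, 1.0, -1.0)
                corr = np.dot(w, pred * y)
                if corr > best_corr:
                    best_corr, best = corr, (j, t, s)
    j, t, s = best
    return lambda Z: s * np.where(Z[:, j] > t, 1.0, -1.0)

def boost_with_sample_reuse(X, y, rounds=20, eta=0.5):
    # Every round reuses the SAME sample (X, y): examples are reweighted
    # based on the current aggregate margin and fed back to the oracle.
    # (Illustrative AdaBoost-style reweighting, not the paper's scheme.)
    n = len(y)
    margin = np.zeros(n)
    hypotheses = []
    for _ in range(rounds):
        w = np.exp(-y * margin)
        w /= w.sum()                      # emphasize poorly fit examples
        h = stump_weak_learner(X, y, w)
        hypotheses.append(h)
        margin += eta * y * h(X)          # update margins on the reused sample
    # Strong learner: sign of the unweighted vote of all weak hypotheses.
    return lambda Z: np.sign(sum(h(Z) for h in hypotheses) + 1e-9)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=300))   # noisy linear labels
strong = boost_with_sample_reuse(X, y)
print("training accuracy:", np.mean(strong(X) == y))
```

In a scheme like this, the final hypothesis depends on the data through every round, so the naive way to control generalization is a blackbox uniform convergence bound over the class of all such ensembles; the abstract's claim is precisely that their analysis of sample reuse yields a strictly better generalization guarantee than such blackbox arguments.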
