

Poster

How many classifiers do we need?

Hyunsuk Kim · Liam Hodgkinson · Ryan Theisen · Michael Mahoney

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract: As performance gains from scaling data or model size experience diminishing returns, it is becoming increasingly popular to turn to ensembling, where the predictions of multiple models are combined to improve accuracy. We focus on majority vote strategies in classification tasks, and take a deep dive into how the disagreement and the polarity, which we define in this paper, among agents relate to the performance gain achieved by aggregating individual agents. This paper addresses this in the following ways. 1) We define a quantity, $\eta$, that represents the polarity among agents within a dataset, and show both empirically and theoretically that this quantity is nearly constant for a dataset regardless of the hyperparameters or architectures of the classifiers. 2) We present a tight upper bound for the error of majority vote under restricted entropy conditions. This bound indicates that the disagreement is linearly correlated with the target, and that the slope is linear in the polarity, $\eta$. 3) We prove asymptotic behavior of the disagreement in terms of the number of agents, which can help predict the performance for a larger number of agents from that of a smaller number. Our theories and claims are supported by experiments on several image classification tasks with various types of neural networks.
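As a concrete illustration of the two central quantities in the abstract (this is a minimal sketch, not the paper's code), majority vote aggregation over agents' predictions and the average pairwise disagreement rate can be computed as follows; the array layout `(n_agents, n_samples)` is an assumption for the example:

```python
import numpy as np

def majority_vote(preds):
    """Aggregate per-agent predictions of shape (n_agents, n_samples)
    by taking, for each sample, the most frequent predicted label."""
    n_agents, n_samples = preds.shape
    out = np.empty(n_samples, dtype=preds.dtype)
    for j in range(n_samples):
        labels, counts = np.unique(preds[:, j], return_counts=True)
        out[j] = labels[np.argmax(counts)]  # ties resolve to the smallest label
    return out

def disagreement(preds):
    """Mean pairwise disagreement: the fraction of samples on which
    two agents differ, averaged over all agent pairs."""
    n_agents = preds.shape[0]
    total, pairs = 0.0, 0
    for a in range(n_agents):
        for b in range(a + 1, n_agents):
            total += np.mean(preds[a] != preds[b])
            pairs += 1
    return total / pairs

# Three agents classifying three samples:
preds = np.array([[0, 1, 1],
                  [0, 1, 0],
                  [1, 1, 1]])
print(majority_vote(preds))   # per-sample majority labels
print(disagreement(preds))    # average pairwise disagreement rate
```

The paper's results relate the error of `majority_vote` to `disagreement` through the polarity $\eta$; the functions above only show how the raw quantities are formed from individual agents' predictions.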
