

Poster

Launch and Iterate: Reducing Prediction Churn

Mahdi Milani Fard · Quentin Cormier · Kevin Canini · Maya Gupta

Area 5+6+7+8 #15

Keywords: [ (Other) Probabilistic Models and Methods ] [ MCMC ] [ (Other) Regression ] [ (Other) Applications ] [ (Other) Machine Learning Topics ]


Abstract:

Practical applications of machine learning often involve successive training iterations with changes to features and training examples. Ideally, changes in the output of any new model should only be improvements (wins) over the previous iteration, but in practice the predictions may change neutrally for many examples, resulting in extra net-zero wins and losses, referred to as unnecessary churn. These changes in the predictions are problematic for usability in some applications, and make it harder and more expensive to measure whether a change is a statistically significant improvement. In this paper, we formulate the problem and present a stabilization operator that regularizes a classifier towards a previous classifier. We use a Markov chain Monte Carlo stabilization operator to produce a model with more consistent predictions without adversely affecting accuracy. We investigate the properties of the proposal with theoretical analysis. Experiments on benchmark datasets for different classification algorithms demonstrate the method and the resulting reduction in churn.
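
To make the setting concrete, the sketch below measures churn (the fraction of examples on which two model versions disagree) and shows one plausible way to regularize a new classifier towards its predecessor: weighting the old model's predictions as auxiliary soft supervision during retraining. This is an illustrative anchoring scheme under assumed choices (logistic regression, blend weight alpha, synthetic data), not the paper's exact MCMC stabilization operator.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def churn(model_a, model_b, X):
    """Fraction of examples on which the two models' predicted labels differ."""
    return np.mean(model_a.predict(X) != model_b.predict(X))

# Synthetic stand-in for two successive training iterations.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_v1, X_v2, y_v1, y_v2 = train_test_split(X, y, test_size=0.5, random_state=1)

# Version 1: the launched model, trained on the first batch of data.
old_model = LogisticRegression(max_iter=1000).fit(X_v1, y_v1)

# Version 2a: retrained from scratch on the new batch, with no stabilization.
new_model = LogisticRegression(max_iter=1000).fit(X_v2, y_v2)

# Version 2b: anchored to the old model. Each new training example appears
# twice, once with its true label (weight 1 - alpha) and once with the old
# model's prediction (weight alpha), pulling the new decision boundary
# toward the old one. alpha is an illustrative anchoring strength.
alpha = 0.4
X_aug = np.vstack([X_v2, X_v2])
y_aug = np.concatenate([y_v2, old_model.predict(X_v2)])
w_aug = np.concatenate([np.full(len(y_v2), 1 - alpha),
                        np.full(len(y_v2), alpha)])
stable_model = LogisticRegression(max_iter=1000).fit(X_aug, y_aug,
                                                     sample_weight=w_aug)

print("churn vs. old model, unstabilized:", churn(old_model, new_model, X))
print("churn vs. old model, stabilized:  ", churn(old_model, stable_model, X))

Larger alpha trades accuracy on the new data for consistency with the old model; the paper's contribution is a principled operator and analysis for navigating that trade-off without hurting accuracy.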
