Poster
in
Workshop: Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design

Post-Calibration Techniques: Balancing Calibration and Score Distribution Alignment

Agathe Fernandes Machado · Arthur Charpentier · Emmanuel Flachaire · Ewen Gallic · Francois HU

Keywords: [ calibration ] [ binary scoring classifier ] [ divergence ] [ tree-based methods ] [ recalibration ] [ score heterogeneity ]


Abstract:

A binary scoring classifier can appear well-calibrated according to standard calibration metrics even when the distribution of its scores does not align with the distribution of the true event probabilities. In this paper, we investigate the impact of post-processing calibration techniques (sometimes called "recalibration") on the score distribution. Using simulated data, where the true probabilities are known, and then real-world datasets with prior knowledge of the event distributions, we compare the performance of an XGBoost model before and after applying calibration techniques. The results show that while methods such as Platt scaling or isotonic regression can improve the model's calibration, they may also increase the divergence between the score distribution and the underlying event probability distribution.
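The trade-off described in the abstract can be illustrated with a minimal, self-contained sketch (not the authors' experimental setup, which uses XGBoost and real datasets). Below, true probabilities are simulated, scores are made deliberately miscalibrated, and isotonic regression (via the pool-adjacent-violators algorithm) is applied as the recalibration step. The expected calibration error (ECE) drops, but the recalibrated scores collapse onto a small set of block means, reducing score heterogeneity. All names and the choice of miscalibration (`s = p**2`) are illustrative assumptions.

```python
import random

def isotonic_fit(scores, labels):
    """Pool Adjacent Violators: monotone least-squares fit of labels on scores."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    means, weights = [], []
    for i in order:
        means.append(float(labels[i]))
        weights.append(1)
        # Merge adjacent blocks that violate monotonicity.
        while len(means) > 1 and means[-2] >= means[-1]:
            m2, w2 = means.pop(), weights.pop()
            m1, w1 = means.pop(), weights.pop()
            means.append((m1 * w1 + m2 * w2) / (w1 + w2))
            weights.append(w1 + w2)
    fitted = [0.0] * len(scores)
    pos = 0
    for m, w in zip(means, weights):
        for _ in range(w):
            fitted[order[pos]] = m
            pos += 1
    return fitted

def ece(scores, labels, bins=10):
    """Expected calibration error with equal-width bins."""
    total, err = len(scores), 0.0
    for b in range(bins):
        lo, hi = b / bins, (b + 1) / bins
        idx = [i for i, s in enumerate(scores)
               if lo <= s < hi or (b == bins - 1 and s == 1.0)]
        if idx:
            conf = sum(scores[i] for i in idx) / len(idx)
            acc = sum(labels[i] for i in idx) / len(idx)
            err += len(idx) / total * abs(conf - acc)
    return err

rng = random.Random(0)
n = 5000
p_true = [rng.random() for _ in range(n)]           # known true probabilities
y = [1 if rng.random() < p else 0 for p in p_true]  # simulated binary events
s_raw = [p ** 2 for p in p_true]                    # deliberately miscalibrated scores

s_cal = isotonic_fit(s_raw, y)

ece_before, ece_after = ece(s_raw, y), ece(s_cal, y)
distinct_before, distinct_after = len(set(s_raw)), len(set(s_cal))
```

Calibration improves (`ece_after < ece_before`), yet the recalibrated score distribution is a coarse step function with far fewer distinct values than the original scores, which is one way the post-calibrated distribution can drift away from the distribution of true probabilities.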
