

Poster

Sequential Harmful Shift Detection Without Labels

Salim I. Amoukou · Tom Bewley · Saumitra Mishra · Freddy Lecue · Daniele Magazzeni · Manuela Veloso

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

We introduce a novel approach, requiring no access to ground-truth labels, for detecting distribution shifts that degrade the performance of machine learning models in continuous production environments. It builds upon the work of Podkopaev and Ramdas [2022], who address scenarios where labels are available for tracking model errors over time. Our solution extends this framework to the label-free setting by substituting true errors with the predictions of a learnt error estimator. We also propose a strategy that leverages these imperfect error predictions while maintaining control over false alarms. Experiments show that our method achieves high detection power and false alarm control under various distribution shifts, including covariate and label shifts as well as natural shifts over geography and time.
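To make the high-level idea concrete, here is a minimal, simplified sketch (not the authors' exact procedure). It assumes a learnt error estimator that noisily predicts the model's per-sample error, and monitors the running mean of those predicted errors against a source-domain baseline plus a Hoeffding-style margin, raising an alarm when the bound is exceeded. The functions `true_error` and `estimated_error`, the baseline, and the margin schedule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_error(shifted):
    # Unobserved ground-truth error; jumps after a harmful shift.
    return 0.5 if shifted else 0.1

def estimated_error(shifted):
    # Hypothetical learnt error estimator: a noisy proxy for the true error.
    return float(np.clip(true_error(shifted) + rng.normal(0, 0.05), 0.0, 1.0))

# Sequential monitoring: alarm when the running mean of estimated errors
# exceeds the source-domain baseline plus a time-uniform Hoeffding-style
# margin (delta = overall false-alarm budget, split over time steps).
baseline, delta = 0.1, 0.05
errs, alarm_at = [], None
for t in range(1, 501):
    shifted = t > 250  # a harmful distribution shift occurs at t = 250
    errs.append(estimated_error(shifted))
    n = len(errs)
    margin = np.sqrt(np.log(2 * n * (n + 1) / delta) / (2 * n))
    if np.mean(errs) > baseline + margin:
        alarm_at = t
        break

print("alarm raised at t =", alarm_at)
```

Before the shift the margin is wide enough that the noisy estimates stay below the threshold; after the shift the running mean rises and the shrinking margin lets the detector fire. The paper's actual method additionally corrects for the estimator's imperfection to retain formal false-alarm guarantees, which this toy loop does not do.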
