

Poster

Achievable distributional robustness when the robust risk is only partially identified

Julia Kostin · Nicola Gnecco · Fanny Yang

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In safety-critical applications, machine learning models should generalize well under worst-case distribution shifts, that is, have a small robust risk. Invariance-based algorithms can provably take advantage of structural assumptions on the shifts when the training distributions are heterogeneous enough to identify the robust risk. In practice, however, such identifiability often does not hold, and these methods then lack guarantees. To evaluate distributional generalization in such scenarios, we propose to study the more general framework of partially identifiable robustness. For concreteness, this paper introduces the framework in the context of linear structural causal models. Further, we define a new risk measure, the identifiable robust risk, and its corresponding (population) minimax quantity, an algorithm-independent measure of the best achievable robustness under partial identifiability. We use this quantity to show that previous approaches provably achieve suboptimal robustness in the partially identifiable case. Finally, we demonstrate that the empirical minimizer of the identifiable robust risk also outperforms existing methods in finite-sample experiments.
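
For readers unfamiliar with the setting, the sketch below illustrates what "robust risk under distribution shift" means in a toy linear structural causal model with hidden confounding. It is only a generic illustration, not the paper's algorithm: the coefficients, noise levels, confounding structure, and grid of mean-shift interventions are all assumptions made for the example, and it does not implement the identifiable robust risk or its empirical minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 3
beta = np.array([1.0, -0.5, 0.25])  # hypothetical causal coefficients (assumed)

def sample_train(n):
    # Observational data: a hidden confounder H drives both X and Y,
    # so ordinary regression picks up a non-causal signal.
    H = rng.normal(size=n)
    X = rng.normal(size=(n, d)) + H[:, None]
    Y = X @ beta + 2.0 * H + rng.normal(scale=0.5, size=n)
    return X, Y

def sample_shifted(n, shift):
    # Test data under a mean-shift intervention on X: the X-H link is severed.
    H = rng.normal(size=n)
    X = rng.normal(loc=shift, size=(n, d))
    Y = X @ beta + 2.0 * H + rng.normal(scale=0.5, size=n)
    return X, Y

def risk(theta, shift, n=100_000):
    # Monte-Carlo estimate of the squared-error risk on the shifted distribution.
    X, Y = sample_shifted(n, shift)
    return np.mean((Y - X @ theta) ** 2)

def robust_risk(theta, shifts):
    # Worst-case (robust) risk over a family of candidate shift strengths.
    return max(risk(theta, s) for s in shifts)

# Fit ordinary least squares on unshifted (confounded) training data ...
X_tr, Y_tr = sample_train(10_000)
theta_ols, *_ = np.linalg.lstsq(X_tr, Y_tr, rcond=None)

# ... and compare its worst-case risk with that of the causal parameter.
shifts = [0.0, 1.0, 2.0, 4.0]
print("OLS robust risk:             ", robust_risk(theta_ols, shifts))
print("Causal-parameter robust risk:", robust_risk(beta, shifts))
```

In this toy setup, least squares attains a smaller risk on the training distribution by exploiting the confounder, but its worst-case risk grows with the shift strength, whereas the causal parameter's risk stays flat. The paper's question is what happens between these extremes when the training environments are not heterogeneous enough to pin down the worst-case (robust) risk exactly.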
