Poster

Generalization Bounds for (Wasserstein) Robust Optimization

Yang An · Rui Gao

Keywords: [ Machine Learning ] [ Optimization ] [ Robustness ]


Abstract:

(Distributionally) robust optimization has recently gained momentum in the machine learning community, owing to its promising applications in developing generalizable learning paradigms. In this paper, we derive generalization bounds for robust optimization and Wasserstein robust optimization with Lipschitz and piecewise Hölder smooth loss functions, under both stochastic and adversarial settings, assuming that the underlying data distribution satisfies transportation-information inequalities. The proofs build on new generalization bounds for variation regularization (such as Lipschitz or gradient regularization) and on its connection with robustness.
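For context, a minimal sketch of the standard Wasserstein robust optimization formulation referenced in the abstract, with generic placeholders $\rho$, $p$, and loss $\ell$ (the paper's exact assumptions may differ):

\[
\min_{\theta} \; \sup_{Q \,:\, W_p(Q, \hat{P}_n) \le \rho} \mathbb{E}_{\xi \sim Q}\big[\ell(\theta; \xi)\big],
\]

where $\hat{P}_n$ is the empirical distribution of the data and $W_p$ is the order-$p$ Wasserstein distance. For $p = 1$ and a loss that is Lipschitz in $\xi$, duality yields the variation-regularized bound

\[
\sup_{Q \,:\, W_1(Q, \hat{P}_n) \le \rho} \mathbb{E}_{Q}\big[\ell(\theta;\xi)\big] \;\le\; \mathbb{E}_{\hat{P}_n}\big[\ell(\theta;\xi)\big] + \rho \,\mathrm{Lip}\big(\ell(\theta;\cdot)\big),
\]

with equality under mild conditions; this is the kind of connection between variation regularization (here, the Lipschitz constant as regularizer) and robustness that the abstract alludes to.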
