Conditional Value at Risk (CVaR) is a 'coherent risk measure' which generalizes expectation: the expectation is recovered at a boundary setting of CVaR's level parameter.
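For concreteness, the standard variational characterization due to Rockafellar and Uryasev makes this boundary reduction explicit (the level-parameter convention below is one of several equivalent choices and need not match the paper's):
\[
\mathrm{CVaR}_{\alpha}(Z) \;=\; \inf_{\mu \in \mathbb{R}} \Big\{ \mu + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(Z-\mu)_{+}\big] \Big\}, \qquad \alpha \in [0,1),
\]
so that \(\mathrm{CVaR}_{0}(Z) = \mathbb{E}[Z]\) at the boundary, while \(\mathrm{CVaR}_{\alpha}(Z)\) tends to the essential supremum of \(Z\) as \(\alpha \to 1\).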
Widely used in mathematical finance, it is garnering increasing interest in machine learning as an alternative approach to regularization, and as a means of ensuring fairness.
This paper presents a generalization bound for learning algorithms that minimize the CVaR of the empirical loss.
The bound is of PAC-Bayesian type and is guaranteed to be small when the empirical CVaR is small.
We achieve this by reducing the problem of estimating CVaR to that of merely estimating an expectation. As a by-product, this reduction yields concentration inequalities for CVaR even when the random variable in question is unbounded.
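To make the estimated quantity concrete, here is a minimal sketch (in the Rockafellar--Uryasev convention above; the function name is ours, and this is not the paper's estimator) of the plug-in empirical CVaR of a sample of losses:

```python
import numpy as np

def empirical_cvar(losses, alpha):
    """Plug-in CVaR at level alpha in [0, 1): roughly the mean of the
    worst (1 - alpha)-fraction of losses. alpha = 0 gives the plain mean."""
    losses = np.asarray(losses, dtype=float)
    # The infimum in the variational formula is (approximately) attained
    # at the empirical alpha-quantile of the losses.
    mu = np.quantile(losses, alpha)
    return mu + np.mean(np.maximum(losses - mu, 0.0)) / (1.0 - alpha)

rng = np.random.default_rng(0)
z = rng.exponential(size=100_000)      # unbounded (heavy-tailed) losses
print(empirical_cvar(z, alpha=0.0))    # ~ E[Z] = 1: the boundary case
print(empirical_cvar(z, alpha=0.95))   # ~ mean of the worst 5% of losses
```

Under the abstract's reduction, concentration of such an estimator around \(\mathrm{CVaR}_{\alpha}(Z)\) follows from concentration of an ordinary empirical mean, even for unbounded \(Z\) as in this example.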