

Poster

Learning with Fitzpatrick Losses

Seta Rakotomandimby · Jean-Philippe Chancelier · Michel De Lara · Mathieu Blondel

West Ballroom A-D #5906
[ Paper ] [ Poster ] [ OpenReview ]
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Fenchel-Young losses are a family of loss functions, encompassing the squared, logistic, and sparsemax losses, among others. They are convex w.r.t. the model output and the target, separately. Each Fenchel-Young loss is implicitly associated with a link function that maps model outputs to predictions. For instance, the logistic loss is associated with the soft argmax link function. Can we build new loss functions associated with the same link function as Fenchel-Young losses? In this paper, we introduce Fitzpatrick losses, a new family of separately convex loss functions based on the Fitzpatrick function. A well-known theoretical tool in maximal monotone operator theory, the Fitzpatrick function naturally leads to a refined Fenchel-Young inequality, making Fitzpatrick losses tighter than Fenchel-Young losses while maintaining the same link function for prediction. As an example, we introduce the Fitzpatrick logistic loss and the Fitzpatrick sparsemax loss, counterparts of the logistic and the sparsemax losses. This yields two new tighter losses associated with the soft argmax and the sparse argmax, two of the most ubiquitous output layers used in machine learning. We study in detail the properties of Fitzpatrick losses and, in particular, we show that they can be seen as Fenchel-Young losses using a modified, target-dependent generating function. We demonstrate the effectiveness of Fitzpatrick losses for label proportion estimation.
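To make the setup concrete, here is a minimal sketch (not the authors' code) of the standard Fenchel-Young machinery the abstract builds on, in the logistic case: the generating function Ω is the negative Shannon entropy on the simplex, its conjugate Ω*(θ) is log-sum-exp, and the link function ∇Ω* is the soft argmax (softmax). The Fitzpatrick losses introduced in the paper share this link function while tightening the loss value itself; that refinement is not reproduced here.

```python
# Minimal sketch of the Fenchel-Young logistic loss and its link function.
# Assumes NumPy/SciPy; this is standard background, not the paper's code.
import numpy as np
from scipy.special import logsumexp, softmax

def fenchel_young_logistic(theta, y):
    """Fenchel-Young loss L(theta, y) = Omega*(theta) + Omega(y) - <theta, y>,
    with Omega(y) = sum_i y_i log y_i (negative entropy) on the simplex,
    so that Omega*(theta) = logsumexp(theta)."""
    # np.where guards the y_i = 0 entries, using the convention 0 log 0 = 0.
    omega_y = np.sum(np.where(y > 0, y * np.log(np.maximum(y, 1e-300)), 0.0))
    return logsumexp(theta) + omega_y - theta @ y

theta = np.array([1.0, -0.5, 2.0])   # model output (logits)
y = np.array([0.2, 0.3, 0.5])        # target label proportions

loss = fenchel_young_logistic(theta, y)
pred = softmax(theta)                # link function: soft argmax

# Sanity checks: the loss is nonnegative, and it vanishes exactly when the
# link of theta matches the target (here theta = log y achieves this).
print(loss)                                   # >= 0
print(fenchel_young_logistic(np.log(y), y))   # ~ 0.0
```

The nonnegativity above is exactly the Fenchel-Young inequality Ω(y) + Ω*(θ) ≥ ⟨θ, y⟩; the paper's Fitzpatrick losses replace Ω(y) + Ω*(θ) with the (smaller) Fitzpatrick function value, yielding a tighter loss with the same soft argmax prediction rule.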
