

Spotlight Poster

Generalization Analysis for Label-Specific Representation Learning

Yi-Fan Zhang · Min-Ling Zhang

East Exhibit Hall A-C #4711
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Label-specific representation learning (LSRL), i.e., constructing a representation with specific discriminative properties for each class label, is an effective strategy for improving the performance of multi-label learning. However, the generalization analysis of LSRL is still in its infancy. Existing theoretical bounds for multi-label learning preserve the coupling among different components and are therefore invalid for LSRL. To overcome this challenge and fill the gap in the generalization theory of LSRL, we develop a novel vector-contraction inequality and derive a generalization bound for a general function class of LSRL with a weaker dependency on the number of labels than the state of the art. In addition, we derive generalization bounds for typical LSRL methods; these theoretical results reveal the impact of different label-specific representations on generalization analysis. The mild bounds, obtained without strong assumptions, explain the good generalization ability of LSRL.
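For context, analyses of this kind typically build on a classical vector-contraction inequality for Rademacher complexities (in the style of Maurer, 2016). A sketch of that standard form is given below; the symbols ($\mathcal{F}$, $g$, $L$, $\varepsilon$) are generic placeholders and the paper's novel inequality for LSRL may differ in structure and constants:

```latex
% Classical vector-contraction inequality (Maurer-style), stated as background.
% For a class F of vector-valued functions f : X -> R^K and a function
% g : R^K -> R that is L-Lipschitz w.r.t. the Euclidean norm,
% with i.i.d. Rademacher variables eps_i and eps_{ik}:
\[
  \mathbb{E}\,\sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \varepsilon_i \, g\bigl(f(x_i)\bigr)
  \;\le\;
  \sqrt{2}\, L \;
  \mathbb{E}\,\sup_{f \in \mathcal{F}} \sum_{i=1}^{n} \sum_{k=1}^{K} \varepsilon_{ik}\, f_k(x_i),
\]
% where K plays the role of the number of labels. Bounds obtained this way
% often scale with K; a key point of the abstract is deriving a bound with a
% weaker dependency on the number of labels than such state-of-the-art results.
```

Intuitively, the inequality transfers the complexity of the composed class $g \circ \mathcal{F}$ to the component functions $f_k$, which is where a dependency on the number of labels can enter and where an improved contraction argument can weaken it.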
