Poster
Revisiting $(\epsilon, \gamma, \tau)$-similarity learning for domain adaptation
Sofiane Dhouib · Ievgen Redko
Room 517 AB #152
Keywords: [ Learning Theory ] [ Similarity and Distance Learning ]
Abstract:
Similarity learning is an active research area in machine learning that tackles the problem of finding a similarity function tailored to an observed data sample in order to achieve efficient classification. This learning scenario has generally been formalized by means of the $(\epsilon, \gamma, \tau)$-good similarity learning framework in the context of supervised classification and has been shown to enjoy strong theoretical guarantees. In this paper, we propose to extend the theoretical analysis of similarity learning to the domain adaptation setting, a situation that occurs when a similarity is learned on samples drawn from one probability distribution and then deployed on samples drawn from a different one. We give a new definition of an $(\epsilon, \gamma)$-good similarity for domain adaptation and prove several results quantifying the performance of a similarity function on a target domain after it has been learned on a source domain. In particular, we show that if the source distribution dominates the target one, then fundamentally new domain adaptation learning bounds can be proved.
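For context, the $(\epsilon, \gamma, \tau)$-good similarity framework referenced in the abstract (due to Balcan, Blum and Srebro) can be sketched as follows; the notation below is a standard rendering and is not taken from the paper itself:

```latex
% A similarity function K : X x X -> [-1, 1] is (epsilon, gamma, tau)-good
% for a distribution P over labeled pairs (x, y) if there exists an
% indicator R of "reasonable" points such that:
%
% (1) with probability at least 1 - epsilon over (x, y) ~ P, the example
%     is on average gamma-more similar to reasonable points of its own
%     class than to reasonable points of the other class:
\Pr_{(x,y)\sim P}\Big[\,
  \mathbb{E}_{(x',y')\sim P}\big[\, y\,y'\,K(x,x') \,\big|\, R(x') \,\big]
  \ge \gamma
\,\Big] \ge 1 - \epsilon,
%
% (2) the reasonable points have non-negligible mass:
\qquad \Pr_{x'\sim P}\big[R(x')\big] \ge \tau.
```

Intuitively, $\epsilon$ bounds the fraction of examples misranked by the similarity, $\gamma$ is the margin of the average similarity gap, and $\tau$ lower-bounds the probability of drawing a usable landmark point.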