Poster
On $f$-Divergence Principled Domain Adaptation: An Improved Framework
Ziqiao Wang · Yongyi Mao
Fri 13 Dec 4:30 p.m. PST – 7:30 p.m. PST
Abstract:
Unsupervised domain adaptation (UDA) plays a crucial role in addressing distribution shifts in machine learning. In this work, we improve the theoretical foundations of UDA proposed by Acuna et al. (2021) by refining their $f$-divergence-based discrepancy and introducing a new measure, the $f$-domain discrepancy ($f$-DD). Obtained by removing the absolute value function and incorporating a scaling parameter, $f$-DD yields novel target error and sample complexity bounds, allowing us to recover previous KL-based results and to bridge the gap between the algorithms and theory presented in Acuna et al. (2021). Leveraging a localization technique, we also develop a fast-rate generalization bound. Empirical results demonstrate the superior performance of $f$-DD-based domain learning algorithms over previous works on popular UDA benchmarks.
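As a rough illustration of the modification the abstract describes (a sketch only: the symbol $d^{t}_{f,\mathcal{H}}$, the loss $\ell$, and the exact placement of the scaling parameter $t$ are assumptions, not the paper's definitions), the $f$-divergence discrepancy of Acuna et al. (2021) takes a variational form of the kind
$$D^{f}_{h,\mathcal{H}}(P\,\|\,Q) \;=\; \sup_{h' \in \mathcal{H}} \Big|\, \mathbb{E}_{x \sim P}\big[\ell(h, h')\big] \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}\big(\ell(h, h')\big)\big] \,\Big|,$$
where $f^{*}$ denotes the convex conjugate of $f$. Dropping the absolute value and scaling the test function by a parameter $t > 0$ then yields an $f$-DD-style measure of the form
$$d^{t}_{f,\mathcal{H}}(P\,\|\,Q) \;=\; \sup_{h' \in \mathcal{H}} \; \mathbb{E}_{x \sim P}\big[t\,\ell(h, h')\big] \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}\big(t\,\ell(h, h')\big)\big].$$
Intuitively, removing the absolute value keeps the supremum a genuine variational lower bound on the $f$-divergence, and the free parameter $t$ can be tuned to tighten the resulting bounds, which is consistent with the abstract's claim of recovering previous KL-based results.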