

Poster

Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?

Petar Stojanov · Zijian Li · Mingming Gong · Ruichu Cai · Jaime Carbonell · Kun Zhang

Keywords: [ Representation Learning ] [ Deep Learning ] [ Adversarial Robustness and Security ] [ Machine Learning ] [ Transfer Learning ] [ Domain Adaptation ]


Abstract: Unsupervised domain adaptation, as a prevalent transfer learning setting, spans many real-world applications. With the increasing representational power and applicability of neural networks, state-of-the-art domain adaptation methods make use of deep architectures to map the input features $X$ to a latent representation $Z$ that has the same marginal distribution across domains. This has been shown to be insufficient for generating an optimal representation for classification, and finding conditionally invariant representations usually requires strong assumptions. We provide reasoning for why, when the supports of the source and target data overlap, any map of $X$ that is fixed across domains may not be suitable for domain adaptation via invariant features. Furthermore, we develop an efficient technique in which the optimal map from $X$ to $Z$ also takes domain-specific information as input, in addition to the features $X$. By using the property of minimal changes of causal mechanisms across domains, our model also takes into account the domain-specific information to ensure that the latent representation $Z$ does not discard valuable information about $Y$. We demonstrate the efficacy of our method via synthetic and real-world data experiments. The code is available at: \texttt{https://github.com/DMIRLAB-Group/DSAN}.
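The sketch below illustrates the general idea described in the abstract, not the authors' actual method (see the linked DSAN repository for that): an encoder maps $X$ to $Z$ while also receiving domain-specific information (here, a learned domain embedding, an illustrative assumption), and the marginal distribution of $Z$ is aligned across domains adversarially via a domain discriminator. All class names, dimensions, and the minimax training step are hypothetical choices for illustration only.

```python
# Hedged sketch: a domain-conditioned encoder with adversarial marginal
# alignment of Z. Architecture and names are assumptions, not the DSAN code.
import torch
import torch.nn as nn

class DomainConditionedEncoder(nn.Module):
    """Maps (X, domain index) -> Z; the domain embedding plays the role of
    the domain-specific input mentioned in the abstract."""
    def __init__(self, x_dim, z_dim, n_domains, d_dim=8):
        super().__init__()
        self.domain_emb = nn.Embedding(n_domains, d_dim)
        self.net = nn.Sequential(
            nn.Linear(x_dim + d_dim, 128), nn.ReLU(),
            nn.Linear(128, z_dim),
        )

    def forward(self, x, domain_idx):
        d = self.domain_emb(domain_idx)            # domain-specific information
        return self.net(torch.cat([x, d], dim=-1))

class DomainDiscriminator(nn.Module):
    """Predicts the domain from Z; training the encoder to fool it encourages
    the same marginal distribution of Z across domains."""
    def __init__(self, z_dim, n_domains):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 64), nn.ReLU(),
            nn.Linear(64, n_domains),
        )

    def forward(self, z):
        return self.net(z)

# Usage sketch with toy tensors: encode a labeled source batch and an
# unlabeled target batch, then form the two losses of a minimax step.
enc = DomainConditionedEncoder(x_dim=10, z_dim=4, n_domains=2)
disc = DomainDiscriminator(z_dim=4, n_domains=2)
clf = nn.Linear(4, 3)                               # label predictor on Z

x_src, y_src = torch.randn(32, 10), torch.randint(0, 3, (32,))
x_tgt = torch.randn(32, 10)
z_src = enc(x_src, torch.zeros(32, dtype=torch.long))
z_tgt = enc(x_tgt, torch.ones(32, dtype=torch.long))

ce = nn.CrossEntropyLoss()
dom_labels = torch.cat([torch.zeros(32), torch.ones(32)]).long()

# Discriminator objective: distinguish source Z from target Z.
d_loss = ce(disc(torch.cat([z_src, z_tgt]).detach()), dom_labels)
# Encoder/classifier objective: classify source labels while fooling the
# discriminator, which pushes the Z marginals to match across domains.
g_loss = ce(clf(z_src), y_src) - ce(disc(torch.cat([z_src, z_tgt])), dom_labels)
```

In practice the two losses would be minimized with separate optimizers (or a gradient reversal layer); the point of the sketch is only that the encoder is not fixed across domains, since it conditions on the domain-specific embedding.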
