September 29, 2021 · Machine Learning

Domain Invariant Representation Learning with Domain Density Transformations

  • 51 minutes
  • Anh Tuan Nguyen, Toan Tran, Yarin Gal, Atilim Gunes Baydin

  • NeurIPS 2021

Abstract

Domain generalization refers to the problem where we aim to train a model on data from a set of source domains so that the model can generalize to unseen target domains. Naively training a model on the aggregate set of data (pooled from all source domains) has been shown to perform suboptimally, since the information learned by that model might be domain-specific and generalize imperfectly to target domains. To tackle this problem, a predominant domain generalization approach is to learn some domain-invariant information for the prediction task, aiming at a good generalization across domains. In this paper, we propose a theoretically grounded method to learn a domain-invariant representation by enforcing the representation network to be invariant under all transformation functions among domains. We next introduce the use of generative adversarial networks to learn such domain transformations in a possible implementation of our method in practice. We demonstrate the effectiveness of our method on several widely used datasets for the domain generalization problem, on all of which we achieve competitive results with state-of-the-art models.
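The core idea of the abstract — making the representation network invariant under transformations that map one domain to another — can be illustrated with a small sketch. The snippet below is a toy illustration, not the paper's implementation: `encode` and `transform` are hypothetical stand-ins for the representation network and for a learned (in the paper, GAN-based) domain-to-domain transformation, and the penalty shown would be added to the usual prediction loss with some weighting coefficient.

```python
import numpy as np

def invariance_penalty(encode, transform, x):
    """Squared distance between the representation of x and of its
    domain-translated counterpart transform(x). Driving this to zero
    encourages the encoder to be invariant under the transformation."""
    z = encode(x)
    z_t = encode(transform(x))
    return float(np.sum((z - z_t) ** 2))

# Toy example: a random linear "encoder" and a brightness-shift
# standing in for a learned source-to-target domain transformation.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
encode = lambda x: W @ x
transform = lambda x: x + 0.5  # hypothetical domain transformation

x = rng.normal(size=8)
penalty = invariance_penalty(encode, transform, x)
```

A mean-subtracting encoder, for instance, is exactly invariant under this constant-shift transformation, so its penalty is zero; the random linear encoder above is not, so its penalty is positive and training would push the encoder toward invariance.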
