Domain Invariant Representation Learning with Domain Density Transformations

September 29, 2021

Domain generalization refers to the problem of training a model on data from a set of source domains so that it generalizes to unseen target domains. Naively training a model on the aggregate set of data pooled from all source domains has been shown to perform suboptimally, since the information the model learns may be domain-specific and transfer imperfectly to target domains. To tackle this problem, a predominant domain generalization approach is to learn domain-invariant information for the prediction task, aiming for good generalization across domains. In this paper, we propose a theoretically grounded method to learn a domain-invariant representation by enforcing the representation network to be invariant under all transformation functions among domains. We then introduce the use of generative adversarial networks (GANs) to learn such domain transformations, yielding a practical implementation of our method. We demonstrate the effectiveness of our method on several widely used domain generalization datasets, achieving results competitive with state-of-the-art models on all of them.
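
The core idea can be sketched concretely: given a transformation f that maps samples between domains, the representation network g is trained so that g(x) ≈ g(f(x)), alongside the usual classification loss. Below is a minimal PyTorch sketch of this objective under stated assumptions; the network RepNet, the squared-distance invariance penalty, and the noise stand-in for the GAN-learned transform are illustrative choices, not the paper's exact architecture or losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepNet(nn.Module):
    """Hypothetical representation network g with a linear classifier head."""
    def __init__(self, in_dim=784, rep_dim=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, rep_dim))
        self.head = nn.Linear(rep_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)          # representation g(x)
        return z, self.head(z)       # representation and class logits

model = RepNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy source-domain batch; in practice x comes from a source domain and
# x_t = f(x) from a learned domain transformation (e.g. a GAN generator).
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
x_t = x + 0.1 * torch.randn_like(x)  # stand-in for the GAN transform f(x)

z, logits = model(x)
z_t, _ = model(x_t)

# Classification loss plus an invariance penalty pushing g(x) toward g(f(x)).
lam = 1.0  # trade-off weight (hypothetical value)
loss = F.cross_entropy(logits, y) + lam * F.mse_loss(z, z_t)

opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the invariance term F.mse_loss(z, z_t) is a simple surrogate for the paper's invariance constraint; any distance or similarity measure between g(x) and g(f(x)) could play the same role.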


Anh Tuan Nguyen, Toan Tran, Yarin Gal, Atilim Gunes Baydin

NeurIPS 2021
