
LAMDA: Label Matching Deep Domain Adaptation

May 24, 2021

Deep domain adaptation (DDA) approaches have recently been shown to outperform their shallow rivals thanks to their greater modeling capacity on complex domains (e.g., images, structured data, and sequential data). The underlying idea is to learn domain-invariant representations on a latent space that can bridge the gap between the source and target domains. Several theoretical studies have established an insightful understanding of the benefits of learning domain-invariant features; however, they are usually limited to the case where there is no label shift, which hinders their applicability. In this paper, we propose and study a new, challenging setting that allows us to use a Wasserstein distance (WS) not only to quantify the data shift but also to define the label shift directly. We further develop a theory showing that minimizing the WS of the data shift closes the gap between the source and target data distributions on the latent space (e.g., an intermediate layer of a deep net), while still being able to quantify the label shift with respect to this latent space. Interestingly, our theory can consequently explain certain drawbacks of learning domain-invariant features on the latent space. Finally, grounded on the results and guidance of our developed theory, we propose the Label Matching Deep Domain Adaptation (LAMDA) approach, which outperforms baselines on real-world datasets for DA problems.
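To make the central quantity concrete, the sketch below estimates a Wasserstein-style discrepancy between source and target latent features using a sliced 1-D approximation. This is only an illustration of the kind of data-shift term a DDA objective would drive toward zero, not the authors' implementation; the function and variable names (e.g., `sliced_wasserstein`, `source_latent`, `target_latent`) are hypothetical, and the features stand in for an intermediate layer of a deep net.

```python
# Minimal illustrative sketch (not the LAMDA implementation): approximate the
# Wasserstein distance between source and target latent features by averaging
# 1-D Wasserstein-1 distances over random projections (sliced approximation).
import numpy as np

def sliced_wasserstein(source_feats, target_feats, n_projections=128, seed=0):
    """Sliced Wasserstein-1 distance between two equal-size sets of latent features."""
    rng = np.random.default_rng(seed)
    d = source_feats.shape[1]
    total = 0.0
    for _ in range(n_projections):
        # Draw a random unit direction and project both feature sets onto it.
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        src_proj = np.sort(source_feats @ theta)
        tgt_proj = np.sort(target_feats @ theta)
        # For equal-size empirical measures, the 1-D Wasserstein-1 distance is
        # the mean absolute difference of the sorted projections.
        total += np.mean(np.abs(src_proj - tgt_proj))
    return total / n_projections

# Hypothetical usage: latent codes from an intermediate layer of a deep net.
source_latent = np.random.randn(500, 64)        # source-domain features
target_latent = np.random.randn(500, 64) + 0.5  # shifted target-domain features
print(sliced_wasserstein(source_latent, target_latent))
```

In a DDA training loop, a term of this kind would be minimized jointly with the source classification loss so that the feature extractor aligns the two domains on the latent space.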


Trung Le, Tuan Nguyen, Nhat Ho, Hung Bui, Dinh Phung

ICML 2021

