LaFea: Learning Latent Representation beyond Feature for Universal Domain Adaptation
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2023-11, Vol. 33 (11), p. 1-1
Main Authors: , , ,
Format: Article
Language: English
Summary: Universal Domain Adaptation (UniDA) is a recently introduced problem that aims to transfer knowledge from a source domain to a target domain without any prior knowledge of the label sets. The main challenge is to separate common samples from private samples in the target domain. In general, existing methods achieve this goal by performing domain adaptation only on the features extracted by the backbone network. However, relying solely on the learning of the backbone network may not fully exploit the effectiveness of features, because 1) the discrepancy between the two domains can naturally distract the learning of the backbone network, and 2) irrelevant content in samples (e.g., backgrounds) likely passes through the backbone network and may accordingly hinder the learning of domain-informative features. To this end, we describe a new method that provides extra guidance to the learning of the backbone network based on a latent representation beyond features (LaFea). We are motivated by the fact that the latent representation can be learned to contain the domain-relevant information scattered across features, and that learning this latent representation can in return naturally promote the effectiveness of the corresponding features. To achieve this goal, we develop a simple GAN-style architecture that transforms features into the latent representation, and we propose new objectives to adversarially learn this representation. Note that the latent representation serves only as an auxiliary signal during training and is not needed at inference. Extensive experiments on four datasets corroborate the superiority of our method over state-of-the-art approaches.
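The auxiliary setup the abstract describes (backbone features mapped GAN-style to a latent representation that is adversarially aligned across domains and then discarded at inference) can be sketched roughly as below. This is a minimal illustrative sketch: the module names, layer sizes, and binary domain-adversarial losses are assumptions for exposition, not the authors' actual LaFea architecture or objectives.

```python
import torch
import torch.nn as nn

class LatentMapper(nn.Module):
    """Maps backbone features to an auxiliary latent representation
    (hypothetical stand-in for the paper's feature-to-latent transform)."""
    def __init__(self, feat_dim=256, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, f):
        return self.net(f)

class DomainDiscriminator(nn.Module):
    """Predicts source vs. target domain from the latent representation."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),  # logit: source (1) vs. target (0)
        )

    def forward(self, z):
        return self.net(z)

def adversarial_losses(z_src, z_tgt, disc):
    """GAN-style objectives: the discriminator separates the two domains
    on the latent, while the mapper is updated to fool it (via alternating
    updates or a gradient-reversal layer in the caller)."""
    bce = nn.BCEWithLogitsLoss()
    d_loss = (bce(disc(z_src), torch.ones(len(z_src), 1)) +
              bce(disc(z_tgt), torch.zeros(len(z_tgt), 1)))
    g_loss = bce(disc(z_tgt), torch.ones(len(z_tgt), 1))
    return d_loss, g_loss

# The latent branch is auxiliary: at inference only the backbone features
# are used, so the mapper and discriminator are dropped after training.
mapper = LatentMapper()
disc = DomainDiscriminator()
f_src, f_tgt = torch.randn(8, 256), torch.randn(8, 256)  # dummy features
z_src, z_tgt = mapper(f_src), mapper(f_tgt)
d_loss, g_loss = adversarial_losses(z_src, z_tgt, disc)
```

In this sketch the adversarial signal on the latent back-propagates through the mapper into the backbone features, which is the mechanism by which the auxiliary representation can guide the backbone's learning.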
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2023.3267765