Learning causal representations for robust domain adaptation

Bibliographic Details
Published in: arXiv.org, 2020-11
Main Authors: Yang, Shuai, Yu, Kui, Cao, Fuyuan, Liu, Lin, Wang, Hao, Li, Jiuyong
Format: Article
Language: English
Description
Summary: Domain adaptation solves the learning problem in a target domain by leveraging the knowledge in a relevant source domain. While remarkable advances have been made, almost all existing domain adaptation methods rely heavily on large amounts of unlabeled target domain data to learn domain-invariant representations that generalize well to the target domain. In many real-world applications, however, target domain data may not always be available. In this paper, we study the case where target domain data is unavailable at the training phase and only well-labeled source domain data is available, which we call robust domain adaptation. To tackle this problem, under the assumption that the causal relationships between features and the class variable are robust across domains, we propose a novel Causal AutoEncoder (CAE), which integrates a deep autoencoder and causal structure learning into a unified model to learn causal representations using data from a single source domain only. Specifically, a deep autoencoder model is adopted to learn low-dimensional representations, and a causal structure learning model is designed to separate the low-dimensional representations into two groups: causal representations and task-irrelevant representations. Extensive experiments on three real-world datasets validate the effectiveness of CAE against eleven state-of-the-art methods.
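The abstract does not spell out the model details, but the high-level idea it describes (an autoencoder whose latent code is split into a causal slice used for prediction and a task-irrelevant slice used only for reconstruction) can be illustrated with a minimal PyTorch sketch. All layer sizes, names, and the fixed latent split below are assumptions for illustration; the paper's actual causal structure learning module, which decides the split, is not reproduced here.

```python
import torch
import torch.nn as nn

class CausalAutoEncoderSketch(nn.Module):
    """Illustrative sketch only (not the paper's CAE): an autoencoder
    whose latent code is partitioned into a causal part, used for
    classification, and a task-irrelevant part, used only for
    reconstruction. Dimensions and the fixed split are assumptions."""

    def __init__(self, in_dim=784, latent_dim=64, causal_dim=32, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )
        self.causal_dim = causal_dim
        # The classifier sees only the causal slice of the latent code.
        self.classifier = nn.Linear(causal_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        z_causal = z[:, :self.causal_dim]     # assumed "causal" slice
        x_hat = self.decoder(z)               # reconstruction uses the full code
        logits = self.classifier(z_causal)    # prediction uses the causal part only
        return x_hat, logits

# Joint objective: reconstruct the input and classify from the causal slice,
# using labeled source-domain data only (no target data involved).
model = CausalAutoEncoderSketch()
x = torch.randn(8, 784)
y = torch.randint(0, 10, (8,))
x_hat, logits = model(x)
loss = nn.functional.mse_loss(x_hat, x) + nn.functional.cross_entropy(logits, y)
loss.backward()
```

In this sketch the split point is hard-coded; in the method the abstract describes, a causal structure learning component is responsible for separating causal from task-irrelevant representations.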
ISSN: 2331-8422
DOI: 10.48550/arxiv.2011.06317