
FFM: Injecting Out-of-Domain Knowledge via Factorized Frequency Modification

Bibliographic Details
Main Authors: Wang, Zijian, Luo, Yadan, Huang, Zi, Baktashmotlagh, Mahsa
Format: Conference Proceeding
Language: English
Description
Summary: This work investigates the Single Domain Generalization (SDG) problem and aims to generalize a model from a single source (i.e., training) domain to multiple target (i.e., test) domains coming from different distributions. Most existing SDG approaches focus on generating out-of-domain samples by either transforming the source images into different styles or optimizing adversarial noise perturbations applied to the source images. In this paper, we show that generating images with diverse styles can be complementary to creating hard samples when handling the SDG task, and propose our approach of Factorized Frequency Modification (FFM) to fulfill this requirement. Specifically, we design a unified framework consisting of a style transformation module, an adversarial perturbation module, and a dynamic frequency selection module. We seamlessly equip the framework with iterative adversarial training that facilitates learning discriminative features from hard and diverse augmented samples. Extensive experiments are performed on four image recognition benchmark datasets (Digits, CIFAR-10-C, CIFAR-100-C, and PACS), which demonstrate that our method outperforms existing state-of-the-art approaches.
ISSN: 2642-9381
DOI: 10.1109/WACV56688.2023.00412
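
Note: the record itself gives no implementation details of FFM. As a rough illustration only of what frequency-domain modification of an image generally looks like (not the authors' modules), the minimal NumPy sketch below perturbs the amplitude spectrum inside a centered low-frequency band while leaving the phase untouched; the function name `frequency_modify` and the parameters `low_freq_ratio` and `noise_scale` are assumptions made for the example.

```python
import numpy as np

def frequency_modify(image, low_freq_ratio=0.1, noise_scale=0.3, rng=None):
    """Illustrative frequency-domain augmentation of a 2-D grayscale image.

    This is a generic sketch, not the FFM method from the paper: it
    multiplicatively perturbs amplitudes inside a centered low-frequency
    band and keeps the phase spectrum fixed.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape

    # Forward FFT; shift the zero-frequency component to the center.
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)

    # Boolean mask selecting a centered low-frequency band.
    bh, bw = int(h * low_freq_ratio), int(w * low_freq_ratio)
    ch, cw = h // 2, w // 2
    mask = np.zeros((h, w), dtype=bool)
    mask[ch - bh:ch + bh, cw - bw:cw + bw] = True

    # Multiplicative random perturbation of the selected amplitudes.
    factor = 1.0 + noise_scale * rng.standard_normal((h, w))
    amplitude = np.where(mask, amplitude * factor, amplitude)

    # Recombine amplitude and phase, then invert the FFT.
    modified = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(modified)))
```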