
Source-Free Open Compound Domain Adaptation in Semantic Segmentation

In this work, we introduce a new concept, named source-free open compound domain adaptation (SF-OCDA), and study it in semantic segmentation. SF-OCDA is more challenging than the traditional domain adaptation but it is more practical. It jointly considers (1) the issues of data privacy and data storage and (2) the scenario of multiple target domains and unseen open domains. In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model. The model is evaluated on the samples from the target and unseen open domains. To solve this problem, we present an effective framework by separating the training process into two stages: (1) pre-training a generalized source model and (2) adapting a target model with self-supervised learning. In our framework, we propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles in the feature-level, which can benefit the training of both stages. First, CPSS can significantly improve the generalization ability of the source model, providing more accurate pseudo-labels for the latter stage. Second, CPSS can reduce the influence of noisy pseudo-labels and also avoid the model overfitting to the target domain during self-supervised learning, consistently boosting the performance on the target and open domains. Experiments demonstrate that our method produces state-of-the-art results on the C-Driving dataset. Furthermore, our model also achieves the leading performance on CityScapes for domain generalization.
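The Cross-Patch Style Swap described in the abstract diversifies feature-level patch styles. As a rough illustration only, the sketch below swaps AdaIN-style channel statistics (per-patch mean and standard deviation) between randomly paired patches of a feature map; the function name, the patch-grid layout, and the use of plain NumPy are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def cross_patch_style_swap(feat, patch=4, rng=None):
    """Swap per-patch channel statistics between random patch pairs (sketch).

    feat: (C, H, W) feature map with H and W divisible by `patch`.
    Each patch is normalized by its own channel-wise mean/std ("style"),
    then re-styled with the statistics of a randomly chosen donor patch,
    keeping the normalized content unchanged.
    """
    rng = np.random.default_rng() if rng is None else rng
    C, H, W = feat.shape
    ph, pw = H // patch, W // patch
    # Split into a patch x patch grid -> (N, C, ph, pw), N = patch * patch.
    patches = feat.reshape(C, patch, ph, patch, pw).transpose(1, 3, 0, 2, 4)
    patches = patches.reshape(patch * patch, C, ph, pw)
    mu = patches.mean(axis=(2, 3), keepdims=True)        # (N, C, 1, 1)
    sd = patches.std(axis=(2, 3), keepdims=True) + 1e-6  # avoid divide-by-zero
    perm = rng.permutation(len(patches))                 # donor patch for each patch
    # Normalize content, then apply the donor patch's style statistics.
    swapped = (patches - mu) / sd * sd[perm] + mu[perm]
    # Reassemble (N, C, ph, pw) back into (C, H, W).
    out = swapped.reshape(patch, patch, C, ph, pw).transpose(2, 0, 3, 1, 4)
    return out.reshape(C, H, W)
```

Applied to intermediate features during training, such a swap leaves each patch's (normalized) content intact while randomizing its style, which is the mechanism the abstract credits for better source-model generalization and more robust self-supervised adaptation.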


Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2022-10, Vol. 32 (10), p. 7019-7032
Main Authors: Zhao, Yuyang, Zhong, Zhun, Luo, Zhiming, Lee, Gim Hee, Sebe, Nicu
Format: Article
Language:English
DOI: 10.1109/TCSVT.2022.3179021
ISSN: 1051-8215
EISSN: 1558-2205
Source: IEEE Electronic Library (IEL) Journals
Subjects:
Adaptation
Adaptation models
Compounds
Data models
Data storage
Domains
Image segmentation
Labels
open compound domain adaptation
Semantic segmentation
Semantics
source-free domain adaptation
Supervised learning
Training
Transfer learning