Domain Adaptive Video Semantic Segmentation via Cross-Domain Moving Object Mixing
The network trained for domain adaptation is prone to bias toward the easy-to-transfer classes. Since the ground truth label on the target domain is unavailable during training, the bias problem leads to skewed predictions, forgetting to predict hard-to-transfer classes. To address this problem, we...
Published in: | arXiv.org 2023-01 |
---|---|
Main Authors: | Cho, Kyusik; Lee, Suhyeon; Hongje Seong; Kim, Euntai |
Format: | Article |
Language: | English |
Subjects: | Adaptation; Benchmarks; Bias; Context; Domains; Image segmentation; Object motion; Pastes; Semantic segmentation |
Online Access: | Get full text |
cited_by | |
---|---|
cites | |
container_end_page | |
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Cho, Kyusik; Lee, Suhyeon; Hongje Seong; Kim, Euntai |
description | The network trained for domain adaptation is prone to bias toward the easy-to-transfer classes. Since the ground truth label on the target domain is unavailable during training, the bias problem leads to skewed predictions, forgetting to predict hard-to-transfer classes. To address this problem, we propose Cross-domain Moving Object Mixing (CMOM) that cuts several objects, including hard-to-transfer classes, in the source domain video clip and pastes them into the target domain video clip. Unlike image-level domain adaptation, the temporal context should be maintained to mix moving objects in two different videos. Therefore, we design CMOM to mix with consecutive video frames, so that unrealistic movements are not occurring. We additionally propose Feature Alignment with Temporal Context (FATC) to enhance target domain feature discriminability. FATC exploits the robust source domain features, which are trained with ground truth labels, to learn discriminative target domain features in an unsupervised manner by filtering unreliable predictions with temporal consensus. We demonstrate the effectiveness of the proposed approaches through extensive experiments. In particular, our model reaches mIoU of 53.81% on VIPER to Cityscapes-Seq benchmark and mIoU of 56.31% on SYNTHIA-Seq to Cityscapes-Seq benchmark, surpassing the state-of-the-art methods by large margins. The code is available at: https://github.com/kyusik-cho/CMOM. |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-01 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2732904093 |
source | Publicly Available Content Database |
subjects | Adaptation; Benchmarks; Bias; Context; Domains; Image segmentation; Object motion; Pastes; Semantic segmentation |
title | Domain Adaptive Video Semantic Segmentation via Cross-Domain Moving Object Mixing |
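For a concrete picture of the two techniques summarized in the description, the sketch below illustrates the core ideas under stated assumptions: video clips are arrays shaped (T, H, W, 3) for frames and (T, H, W) for per-pixel class labels, and every name here (`cmom_mix`, `temporal_consensus_filter`, `rare_classes`) is an illustrative choice, not the authors' released implementation — see the linked GitHub repository for the actual code.

```python
# Illustrative sketch only: simplified from the ideas in the abstract, not the
# authors' code. Assumed shapes: frames (T, H, W, 3), labels (T, H, W).
import numpy as np

def cmom_mix(src_frames, src_labels, tgt_frames, tgt_labels, rare_classes):
    """Cross-domain Moving Object Mixing (CMOM), simplified.

    Objects of the chosen (hard-to-transfer) classes are cut from the source
    clip and pasted into the target clip. Because the per-frame masks come
    from consecutive source frames, the pasted objects keep their original
    motion, so no unrealistic movements are introduced into the mixed clip.
    """
    mixed_frames = tgt_frames.copy()
    mixed_labels = tgt_labels.copy()
    masks = np.isin(src_labels, rare_classes)        # (T, H, W) boolean masks
    for t in range(src_frames.shape[0]):
        m = masks[t]
        mixed_frames[t][m] = src_frames[t][m]        # paste appearance
        mixed_labels[t][m] = src_labels[t][m]        # paste matching labels
    return mixed_frames, mixed_labels

def temporal_consensus_filter(pred_t, pred_prev_warped, ignore_index=255):
    """Pseudo-label filtering with temporal consensus, simplified.

    pred_t is the current frame's predicted class map and pred_prev_warped is
    the previous frame's prediction warped into the current frame (e.g. with
    optical flow, not shown here). Pixels where the two disagree are treated
    as unreliable and set to ignore_index so they do not supervise training.
    """
    pseudo = pred_t.copy()
    pseudo[pred_t != pred_prev_warped] = ignore_index
    return pseudo
```

Applying the same cut-and-paste masks across consecutive frames is what keeps temporal context intact in the mixed clip, and the consensus check is one simple way to discard unreliable target-domain predictions before the feature alignment step described in the abstract.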