
AdaTriplet-RA: Domain Matching via Adaptive Triplet and Reinforced Attention for Unsupervised Domain Adaptation

Unsupervised domain adaptation (UDA) is a transfer learning task in which the data and annotations of the source domain are available, but only unlabeled target data are accessible during training. Most previous methods try to minimise the domain gap by aligning the distributions of the source and target domains, which has a notable limitation: it operates at the domain level and neglects sample-level differences. To mitigate this weakness, we propose to improve the unsupervised domain adaptation task with an inter-domain sample matching scheme. We apply the widely used and robust triplet loss to match the inter-domain samples. To reduce the catastrophic effect of the inaccurate pseudo-labels generated during training, we propose a novel uncertainty measurement method to select reliable pseudo-labels automatically and refine them progressively. We apply the Gumbel-Softmax discrete relaxation technique to realise an adaptive Top-k selection scheme that fulfils this functionality. In addition, to enable global ranking optimisation within a batch for the domain matching, the whole model is optimised via a novel reinforced attention mechanism supervised by the policy gradient algorithm, using Average Precision (AP) as the reward. Our model, termed AdaTriplet-RA, achieves state-of-the-art results on several public benchmark datasets, and its effectiveness is validated via comprehensive ablation studies. Our method improves the accuracy of the baseline by 9.7% (ResNet-101) and 6.2% (ResNet-50) on the VisDA dataset and by 4.22% (ResNet-50) on the DomainNet dataset. The source code is publicly available at https://github.com/shuxy0120/AdaTriplet-RA.
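As a rough illustration of the inter-domain sample matching described in the abstract, the sketch below implements a hard-mining triplet loss between labelled source samples and confidence-filtered pseudo-labelled target samples. It is a minimal PyTorch sketch, not the authors' implementation: all function and variable names are illustrative assumptions, and the fixed confidence threshold merely stands in for the paper's adaptive Gumbel-Softmax Top-k selection and reinforced-attention ranking.

    # Minimal sketch (not the authors' code) of inter-domain triplet matching
    # with pseudo-label filtering; names and thresholds are assumptions.
    import torch
    import torch.nn.functional as F

    def select_confident_pseudo_labels(target_logits, threshold=0.9):
        # Keep only target samples whose softmax confidence exceeds a fixed
        # threshold; the paper instead learns an adaptive Top-k selection.
        probs = F.softmax(target_logits, dim=1)
        conf, pseudo_labels = probs.max(dim=1)
        return pseudo_labels, conf > threshold

    def inter_domain_triplet_loss(src_feat, src_labels, tgt_feat, tgt_logits,
                                  margin=0.3, threshold=0.9):
        # Anchor: source sample; positive: target sample sharing its pseudo-label;
        # negative: target sample with a different pseudo-label.
        pseudo, keep = select_confident_pseudo_labels(tgt_logits, threshold)
        tgt_feat, pseudo = tgt_feat[keep], pseudo[keep]
        if tgt_feat.size(0) == 0:
            return src_feat.new_zeros(())
        dist = torch.cdist(src_feat, tgt_feat)                 # pairwise L2 distances
        same = src_labels.unsqueeze(1).eq(pseudo.unsqueeze(0))  # label agreement mask
        losses = []
        for i in range(src_feat.size(0)):
            if same[i].any() and (~same[i]).any():
                d_pos = dist[i][same[i]].max()                 # hardest positive
                d_neg = dist[i][~same[i]].min()                # hardest negative
                losses.append(F.relu(d_pos - d_neg + margin))
        return torch.stack(losses).mean() if losses else src_feat.new_zeros(())

    # Usage sketch: 16 source and 16 target samples, 128-d features, 10 classes.
    src_feat = F.normalize(torch.randn(16, 128), dim=1)
    tgt_feat = F.normalize(torch.randn(16, 128), dim=1)
    loss = inter_domain_triplet_loss(src_feat, torch.randint(0, 10, (16,)),
                                     tgt_feat, torch.randn(16, 10))

In the paper itself, the pseudo-label selection is learned adaptively and the batch-level ranking is further optimised with a policy-gradient reward based on Average Precision; the sketch only conveys the anchor/positive/negative structure of the matching.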


Bibliographic Details
Published in: arXiv.org, 2022-11
Main Authors: Shu, Xinyao; Shiyang Yan; Lu, Zhenyu; Wang, Xinshao; Xie, Yuan
Format: Article
Language: English
Subjects: Ablation; Adaptation; Algorithms; Annotations; Cognitive tasks; Datasets; Domains; Labels; Matching; Measurement methods; Optimization; Source code; Training
Online Access: Get full text
container_title arXiv.org
creator Shu, Xinyao
Shiyang Yan
Lu, Zhenyu
Wang, Xinshao
Xie, Yuan
description Unsupervised domain adaptation (UDA) is a transfer learning task in which the data and annotations of the source domain are available, but only unlabeled target data are accessible during training. Most previous methods try to minimise the domain gap by aligning the distributions of the source and target domains, which has a notable limitation: it operates at the domain level and neglects sample-level differences. To mitigate this weakness, we propose to improve the unsupervised domain adaptation task with an inter-domain sample matching scheme. We apply the widely used and robust triplet loss to match the inter-domain samples. To reduce the catastrophic effect of the inaccurate pseudo-labels generated during training, we propose a novel uncertainty measurement method to select reliable pseudo-labels automatically and refine them progressively. We apply the Gumbel-Softmax discrete relaxation technique to realise an adaptive Top-k selection scheme that fulfils this functionality. In addition, to enable global ranking optimisation within a batch for the domain matching, the whole model is optimised via a novel reinforced attention mechanism supervised by the policy gradient algorithm, using Average Precision (AP) as the reward. Our model, termed AdaTriplet-RA, achieves state-of-the-art results on several public benchmark datasets, and its effectiveness is validated via comprehensive ablation studies. Our method improves the accuracy of the baseline by 9.7% (ResNet-101) and 6.2% (ResNet-50) on the VisDA dataset and by 4.22% (ResNet-50) on the DomainNet dataset. The source code is publicly available at https://github.com/shuxy0120/AdaTriplet-RA.
format article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-11
issn 2331-8422
language eng
recordid cdi_proquest_journals_2737266685
source Publicly Available Content Database
subjects Ablation
Adaptation
Algorithms
Annotations
Cognitive tasks
Datasets
Domains
Labels
Matching
Measurement methods
Optimization
Source code
Training
title AdaTriplet-RA: Domain Matching via Adaptive Triplet and Reinforced Attention for Unsupervised Domain Adaptation