Domain Adaptive Ensemble Learning
The problem of generalizing deep neural networks from multiple source domains to a target one is studied under two settings: When unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem, otherwise a domain generalization (DG) problem. We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems. A DAEL model is composed of a CNN feature extractor shared across domains and multiple classifier heads each trained to specialize in a particular source domain. Each such classifier is an expert to its own domain but a non-expert to others. DAEL aims to learn these experts collaboratively so that when forming an ensemble, they can leverage complementary information from each other to be more effective for an unseen target domain. To this end, each source domain is used in turn as a pseudo-target-domain with its own expert providing supervisory signal to the ensemble of non-experts learned from the other sources. To deal with unlabeled target data under the UDA setting where real expert does not exist, DAEL uses pseudo labels to supervise the ensemble learning. Extensive experiments on three multi-source UDA datasets and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins.
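The abstract describes DAEL's core training signal: for each pseudo-target domain, the ensemble of non-expert heads is supervised by that domain's expert head on top of a shared feature extractor. A minimal NumPy sketch of that loss follows; the dimensions, the linear-plus-ReLU extractor, and the use of the expert's soft prediction as the cross-entropy target are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # numerically stable row-wise softmax
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

K, D, H, C, B = 3, 16, 8, 4, 5      # source domains, input dim, feature dim, classes, batch size
W_shared = rng.normal(scale=0.1, size=(D, H))                    # shared feature extractor (assumed linear)
heads = [rng.normal(scale=0.1, size=(H, C)) for _ in range(K)]   # one classifier head per source domain

def dael_loss(x, pseudo_target):
    """Collaborative ensemble loss for one pseudo-target domain.

    The pseudo-target domain's own head acts as the expert; the mean
    prediction of the remaining (non-expert) heads is pushed toward it.
    """
    f = np.maximum(x @ W_shared, 0.0)                 # shared features (ReLU)
    expert = softmax(f @ heads[pseudo_target])        # expert's soft supervision signal
    others = [softmax(f @ heads[j]) for j in range(K) if j != pseudo_target]
    ensemble = np.mean(others, axis=0)                # non-expert ensemble prediction
    # cross-entropy from the expert's distribution to the non-expert ensemble
    return float(-np.mean(np.sum(expert * np.log(ensemble + 1e-12), axis=1)))

# each source domain takes a turn as the pseudo-target
total = sum(dael_loss(rng.normal(size=(B, D)), i) for i in range(K)) / K
```

Under the UDA setting, where no real expert exists for the target domain, the same loss would be driven by pseudo labels on unlabeled target batches instead of an expert head's output.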
Published in: | IEEE transactions on image processing, 2021, Vol. 30, p. 8008-8018 |
---|---|
Main Authors: | Zhou, Kaiyang; Yang, Yongxin; Qiao, Yu; Xiang, Tao |
Format: | Article |
Language: | English |
container_end_page | 8018 |
container_issue | |
container_start_page | 8008 |
container_title | IEEE transactions on image processing |
container_volume | 30 |
creator | Zhou, Kaiyang Yang, Yongxin Qiao, Yu Xiang, Tao |
description | The problem of generalizing deep neural networks from multiple source domains to a target one is studied under two settings: When unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem, otherwise a domain generalization (DG) problem. We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems. A DAEL model is composed of a CNN feature extractor shared across domains and multiple classifier heads each trained to specialize in a particular source domain. Each such classifier is an expert to its own domain but a non-expert to others. DAEL aims to learn these experts collaboratively so that when forming an ensemble, they can leverage complementary information from each other to be more effective for an unseen target domain. To this end, each source domain is used in turn as a pseudo-target-domain with its own expert providing supervisory signal to the ensemble of non-experts learned from the other sources. To deal with unlabeled target data under the UDA setting where real expert does not exist, DAEL uses pseudo labels to supervise the ensemble learning. Extensive experiments on three multi-source UDA datasets and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins. |
doi_str_mv | 10.1109/TIP.2021.3112012 |
format | article |
fulltext | fulltext |
identifier | ISSN: 1057-7149 |
ispartof | IEEE transactions on image processing, 2021, Vol.30, p.8008-8018 |
issn | 1057-7149 (print); 1941-0042 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2575979731 |
source | IEEE Electronic Library (IEL) Journals |
subjects | Adaptation models; Artificial neural networks; Classifiers; Collaboration; collaborative ensemble learning; Computational modeling; Datasets; Domain adaptation; domain generalization; Domains; Ensemble learning; Feature extraction; Head; Machine learning; Neural networks; Training |
title | Domain Adaptive Ensemble Learning |