Domain Adaption via Feature Selection on Explicit Feature Map
In most domain adaption approaches, all features are used for domain adaption. However, not every feature is beneficial for domain adaption, and indiscriminately involving all features can degrade performance. In other words, to make the model trained on the source domain work well on the target domain, it is desirable to find invariant features for domain adaption rather than using all features. However, invariant features across domains may lie in a higher order space, instead of in the original feature space. Moreover, the discriminative ability of some invariant features, such as shared background information, is weak and needs to be further filtered. Therefore, in this paper, we propose a novel domain adaption algorithm based on an explicit feature map and feature selection. The data are first represented by a kernel-induced explicit feature map, such that high-order invariant features can be revealed. Then, by minimizing the marginal distribution difference, the conditional distribution difference, and the model error, the invariant discriminative features are effectively selected. This selection problem is NP-hard, so we relax it and solve it with a cutting plane algorithm. Experimental results on six real-world benchmarks demonstrate the effectiveness and efficiency of the proposed algorithm, which outperforms many state-of-the-art domain adaption approaches.
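The abstract above describes the method only at a high level. As a rough illustration of its two main ingredients, a kernel-induced explicit feature map and the selection of domain-invariant features by a distribution-difference criterion, the following Python sketch uses random Fourier features as a stand-in for the paper's explicit map and a per-feature marginal mean difference as a stand-in for its selection objective. Every function name here is hypothetical, and the greedy top-k ranking is an assumption: the paper instead poses an NP-hard selection problem that also involves the conditional distribution difference and the model error, relaxes it, and solves it with a cutting plane algorithm.

```python
import numpy as np


def explicit_feature_map(X, n_components=500, gamma=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel.

    A stand-in for the paper's kernel-induced explicit feature map
    (the subject terms suggest a Taylor-series construction instead).
    The fixed seed keeps the map identical for source and target data.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], n_components))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_components)
    return np.sqrt(2.0 / n_components) * np.cos(X @ W + b)


def marginal_difference(Zs, Zt):
    """Squared source/target mean gap per mapped feature: a crude proxy
    for the marginal distribution difference the paper minimizes."""
    return (Zs.mean(axis=0) - Zt.mean(axis=0)) ** 2


def select_invariant_features(Xs, Xt, k=100, **map_kwargs):
    """Keep the k mapped features with the smallest source/target gap
    (a greedy stand-in for the paper's relaxed, cutting-plane-solved
    joint selection)."""
    Zs = explicit_feature_map(Xs, **map_kwargs)
    Zt = explicit_feature_map(Xt, **map_kwargs)
    keep = np.argsort(marginal_difference(Zs, Zt))[:k]
    return keep, Zs[:, keep], Zt[:, keep]


if __name__ == "__main__":
    # Toy usage: source and target drawn from shifted Gaussians.
    rng = np.random.default_rng(1)
    Xs = rng.normal(0.0, 1.0, size=(200, 10))
    Xt = rng.normal(0.5, 1.2, size=(150, 10))
    keep, Zs_sel, Zt_sel = select_invariant_features(Xs, Xt, k=50)
    print(Zs_sel.shape, Zt_sel.shape)  # (200, 50) (150, 50)
```

A real implementation would select features jointly under all three terms of the objective rather than ranking them independently; the per-feature ranking here only conveys the flavor of filtering out non-invariant, weakly discriminative dimensions.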
Published in: | IEEE Transactions on Neural Networks and Learning Systems, 2019-04, Vol.30 (4), p.1180-1190 |
---|---|
Main Authors: | Deng, Wan-Yu; Lendasse, Amaury; Ong, Yew-Soon; Tsang, Ivor Wai-Hung; Chen, Lin; Zheng, Qing-Hua |
Format: | Article |
Language: | English |
Subjects: | Adaptation models; Algorithms; Benchmarks; Distribution distance; domain adaption; Domains; Feature extraction; Feature maps; feature selection; Invariants; Kernel; Learning systems; Optimization; Performance degradation; Taylor series; Telecommunications; transfer learning |
container_end_page | 1190 |
---|---|
container_issue | 4 |
container_start_page | 1180 |
container_title | IEEE Transactions on Neural Networks and Learning Systems |
container_volume | 30 |
creator | Deng, Wan-Yu ; Lendasse, Amaury ; Ong, Yew-Soon ; Tsang, Ivor Wai-Hung ; Chen, Lin ; Zheng, Qing-Hua |
description | In most domain adaption approaches, all features are used for domain adaption. However, not every feature is beneficial for domain adaption, and indiscriminately involving all features can degrade performance. In other words, to make the model trained on the source domain work well on the target domain, it is desirable to find invariant features for domain adaption rather than using all features. However, invariant features across domains may lie in a higher order space, instead of in the original feature space. Moreover, the discriminative ability of some invariant features, such as shared background information, is weak and needs to be further filtered. Therefore, in this paper, we propose a novel domain adaption algorithm based on an explicit feature map and feature selection. The data are first represented by a kernel-induced explicit feature map, such that high-order invariant features can be revealed. Then, by minimizing the marginal distribution difference, the conditional distribution difference, and the model error, the invariant discriminative features are effectively selected. This selection problem is NP-hard, so we relax it and solve it with a cutting plane algorithm. Experimental results on six real-world benchmarks demonstrate the effectiveness and efficiency of the proposed algorithm, which outperforms many state-of-the-art domain adaption approaches. |
doi_str_mv | 10.1109/TNNLS.2018.2863240 |
format | article |
identifier | ISSN: 2162-237X |
ispartof | IEEE Transactions on Neural Networks and Learning Systems, 2019-04, Vol.30 (4), p.1180-1190 |
issn | 2162-237X 2162-2388 |
language | eng |
source | IEEE Xplore (Online service) |
subjects | Adaptation models ; Algorithms ; Benchmarks ; Distribution distance ; domain adaption ; Domains ; Feature extraction ; Feature maps ; feature selection ; Invariants ; Kernel ; Learning systems ; Optimization ; Performance degradation ; Taylor series ; Telecommunications ; transfer learning |
title | Domain Adaption via Feature Selection on Explicit Feature Map |