A Unifying Framework for Learning the Linear Combiners for Classifier Ensembles
For classifier ensembles, an effective combination method is to combine the outputs of each classifier using a linearly weighted combination rule. There are multiple ways to linearly combine classifier outputs and it is beneficial to analyze them as a whole. We present a unifying framework for multiple linear combination types in this paper. This unification enables using the same learning algorithms for different types of linear combiners. We present various ways to train the weights using regularized empirical loss minimization. We propose using the hinge loss for better performance as compared to the conventional least-squares loss. We analyze the effects of using hinge loss for various types of linear weight training by running experiments on three different databases. We show that, in certain problems, linear combiners with fewer parameters may perform as well as the ones with much larger number of parameters even in the presence of regularization.
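The weight-learning approach summarized in the abstract lends itself to a compact illustration. The sketch below is a minimal, hypothetical example (not the authors' code) of one linear combiner type, a single weight per base classifier, trained by subgradient descent on an L2-regularized multi-class hinge loss; the function name, data shapes, and hyperparameters are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative, not the authors' implementation): learn one weight
# per base classifier for the combined score f(x)_c = sum_m w_m * s_{m,c}(x),
# by subgradient descent on an L2-regularized multi-class hinge loss.
import numpy as np

def train_linear_combiner(scores, labels, lam=0.1, lr=0.01, epochs=100):
    """scores: (N, M, C) per-sample, per-classifier class scores.
    labels: (N,) true class indices. Returns per-classifier weights of shape (M,)."""
    N, M, C = scores.shape
    w = np.ones(M) / M                                         # start from the uniform (mean) combiner
    for _ in range(epochs):
        combined = np.tensordot(scores, w, axes=([1], [0]))    # (N, C) combined class scores
        grad = lam * w                                         # gradient of the L2 regularizer
        for i in range(N):
            y = labels[i]
            margins = combined[i] - combined[i, y] + 1.0       # hinge margins against the true class
            margins[y] = 0.0
            j = int(np.argmax(margins))                        # most-violating competitor class
            if margins[j] > 0:                                 # hinge loss is active for this sample
                grad += (scores[i, :, j] - scores[i, :, y]) / N
        w -= lr * grad
    return w

# Toy usage: 3 base classifiers, 4 classes, random scores standing in for real classifier outputs.
rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 3, 4))
labels = rng.integers(0, 4, size=200)
print("learned combiner weights:", train_linear_combiner(scores, labels))
```

Other linear combiner types (for example, class-dependent weights) would differ mainly in the shape of the weight vector and the inner product, not in the regularized hinge-loss training loop sketched here.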
Main Authors: | Erdogan, H ; Sen, M U |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Accuracy ; classifier fusion ; Fasteners ; linear classifier learning ; linear combiners ; Minimization ; stacked generalization ; Training ; Training data ; Vectors |
container_start_page | 2985 |
---|---|
container_end_page | 2988 |
creator | Erdogan, H ; Sen, M U |
description | For classifier ensembles, an effective combination method is to combine the outputs of each classifier using a linearly weighted combination rule. There are multiple ways to linearly combine classifier outputs and it is beneficial to analyze them as a whole. We present a unifying framework for multiple linear combination types in this paper. This unification enables using the same learning algorithms for different types of linear combiners. We present various ways to train the weights using regularized empirical loss minimization. We propose using the hinge loss for better performance as compared to the conventional least-squares loss. We analyze the effects of using hinge loss for various types of linear weight training by running experiments on three different databases. We show that, in certain problems, linear combiners with fewer parameters may perform as well as the ones with much larger number of parameters even in the presence of regularization. |
doi_str_mv | 10.1109/ICPR.2010.731 |
format | conference_proceeding |
identifier | ISSN: 1051-4651 ; EISSN: 2831-7475 ; ISBN: 1424475422, 9781424475421 ; EISBN: 1424475414, 9781424475414, 0769541097, 9780769541099 |
ispartof | 2010 20th International Conference on Pattern Recognition, 2010, p.2985-2988 |
issn | 1051-4651 2831-7475 |
language | eng |
recordid | cdi_ieee_primary_5595979 |
source | IEEE Xplore All Conference Series |
subjects | Accuracy ; classifier fusion ; Fasteners ; linear classifier learning ; linear combiners ; Minimization ; stacked generalization ; Training ; Training data ; Vectors |
title | A Unifying Framework for Learning the Linear Combiners for Classifier Ensembles |