
Self-paced Ensemble for Highly Imbalanced Massive Data Classification

Many real-world applications reveal difficulties in learning classifiers from imbalanced data. The rising big data era has brought more classification tasks with large-scale but extremely imbalanced and low-quality datasets. Most existing learning methods suffer from poor performance or low computational efficiency in such scenarios. To tackle this problem, we conduct deep investigations into the nature of class imbalance, which reveal that not only the disproportion between classes, but also other difficulties embedded in the data, especially noise and class overlap, prevent us from learning effective classifiers. Taking these factors into consideration, we propose a novel framework for imbalanced classification that aims to generate a strong ensemble by self-paced harmonizing of data hardness via under-sampling. Extensive experiments show that this new framework, while being very computationally efficient, leads to robust performance even under highly overlapping classes and extremely skewed distributions. Note that our methods can be easily adapted to most existing learning methods (e.g., C4.5, SVM, GBDT, and neural networks) to boost their performance on imbalanced data.
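The abstract outlines the core idea: repeatedly under-sample the majority class according to a "hardness" measure that the growing ensemble itself provides, shifting focus from easy to hard samples over iterations. As a rough illustration only — the class names, the decision-stump base learner, and the exact hardness binning and weighting formulas below are my own assumptions for a self-contained sketch, not the paper's published algorithm:

```python
import numpy as np

class Stump:
    """Tiny decision stump used as a placeholder base learner."""
    def fit(self, X, y):
        best, best_err = (0, 0.0, 1), np.inf
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = (pol * (X[:, j] - t) > 0).astype(int)
                    err = np.mean(pred != y)
                    if err < best_err:
                        best_err, best = err, (j, t, pol)
        self.j, self.t, self.pol = best
        return self

    def predict_proba(self, X):
        # "Probability" of the minority class (1), as a hard 0/1 score.
        return (self.pol * (X[:, self.j] - self.t) > 0).astype(float)

class SelfPacedEnsemble:
    """Sketch of a self-paced under-sampling ensemble (y in {0, 1}, 1 = minority)."""
    def __init__(self, n_estimators=10, n_bins=5, seed=0):
        self.n_estimators, self.n_bins = n_estimators, n_bins
        self.rng = np.random.default_rng(seed)
        self.models = []

    def _ensemble_proba(self, X):
        if not self.models:
            return np.full(len(X), 0.5)  # no opinion yet
        return np.mean([m.predict_proba(X) for m in self.models], axis=0)

    def fit(self, X, y):
        min_idx, maj_idx = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
        for i in range(self.n_estimators):
            # Hardness of each majority sample = current ensemble's error on it
            # (its true label is 0, so the class-1 score *is* the error).
            hard = self._ensemble_proba(X[maj_idx])
            # Self-paced factor grows across rounds; larger alpha flattens the
            # bin weights so later rounds retain more hard samples.
            alpha = np.tan(np.pi / 2 * 0.99 * i / max(self.n_estimators - 1, 1))
            bins = np.minimum((hard * self.n_bins).astype(int), self.n_bins - 1)
            weights = np.zeros(len(maj_idx))
            for b in range(self.n_bins):
                mask = bins == b
                if mask.any():
                    weights[mask] = 1.0 / ((hard[mask].mean() + alpha + 1e-12) * mask.sum())
            weights /= weights.sum()
            # Under-sample the majority down to the minority size, then train.
            picked = self.rng.choice(maj_idx, size=len(min_idx), replace=True, p=weights)
            sub = np.concatenate([min_idx, picked])
            self.models.append(Stump().fit(X[sub], y[sub]))
        return self

    def predict(self, X):
        return (self._ensemble_proba(X) >= 0.5).astype(int)
```

The stump stands in for any base learner; as the abstract notes, the framework is learner-agnostic, so a real implementation would plug in C4.5, SVM, GBDT, or a neural network instead.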


Bibliographic Details
Main Authors: Liu, Zhining, Cao, Wei, Gao, Zhifeng, Bian, Jiang, Chen, Hechang, Chang, Yi, Liu, Tie-Yan
Format: Conference Proceeding
Published in: 2020 IEEE 36th International Conference on Data Engineering (ICDE), 2020, p. 841-852
DOI: 10.1109/ICDE48307.2020.00078
EISSN: 2375-026X
Language: English
Subjects: data re-sampling; ensemble learning; imbalance classification; imbalance learning; Learning systems; Neural networks; Noise measurement; Robustness; Support vector machines; Task analysis; Training
Online Access: Request full text