
Class-incremental learning with Balanced Embedding Discrimination Maximization

Class-incremental learning aims to learn representations and classifiers for a growing set of categories while avoiding catastrophic forgetting. In this work, a unified method named Balanced Embedding Discrimination Maximization (BEDM) is developed to make the intermediate embedding more discriminative. Specifically, we utilize an orthogonality constraint based on the doubly-blocked Toeplitz matrix to minimize the correlation between convolution kernels, and we introduce an algorithm for similarity visualization. Furthermore, uneven sample counts and distribution shift between old and new tasks result in strongly biased classifiers. To mitigate this imbalance, we propose an adaptive balance weighting in the softmax that dynamically compensates for under-represented categories. In addition, hybrid embedding learning is introduced to preserve knowledge from old models, with fewer hyperparameters than conventional knowledge distillation. Our proposed method outperforms existing approaches on three mainstream benchmark datasets. Moreover, it produces a more uniform similarity histogram and a more stable spectrum; Grad-CAM and t-SNE visualizations further confirm its effectiveness. Code is available at https://github.com/wqzh/BEDM.
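The abstract names two concrete mechanisms: a kernel-orthogonality constraint and an adaptive class-balanced softmax. The sketch below is an illustration only, not the authors' implementation: BEDM builds its constraint from the doubly-blocked Toeplitz matrix of the convolution, whereas here a flattened-kernel Gram matrix is used as a simpler stand-in, and the log-prior class weighting is one common balancing trick, not necessarily the paper's exact formulation. All function names are hypothetical.

```python
import numpy as np

def kernel_orthogonality_penalty(kernels):
    """Penalize pairwise correlation between convolution filters.

    kernels: array of shape (out_channels, in_channels, k, k).
    Simplified surrogate for an orthogonality constraint: flatten each
    filter, L2-normalize it, and drive the Gram matrix of the filter
    bank toward the identity (zero off-diagonal correlation).
    """
    out_ch = kernels.shape[0]
    w = kernels.reshape(out_ch, -1)
    w = w / (np.linalg.norm(w, axis=1, keepdims=True) + 1e-12)
    gram = w @ w.T                      # pairwise cosine similarities
    off_diag = gram - np.eye(out_ch)    # identity = perfectly decorrelated
    return float(np.sum(off_diag ** 2))

def balanced_softmax_loss(logits, label, class_counts, weight=1.0):
    """Cross-entropy with a per-class log-prior to offset imbalance.

    Adding weight * log(n_c) to each logit shifts probability mass
    toward frequent classes inside the loss, which strengthens the
    gradient signal for rare (e.g. newly added) classes at train time.
    """
    adjusted = logits + weight * np.log(np.asarray(class_counts, dtype=float))
    adjusted = adjusted - adjusted.max()  # numerical stability
    log_probs = adjusted - np.log(np.exp(adjusted).sum())
    return float(-log_probs[label])
```

For two orthogonal filters the penalty is zero; for two identical filters it is maximal, so minimizing it alongside the task loss pushes the filter bank toward decorrelated, more discriminative embeddings.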


Bibliographic Details
Published in: Neural Networks, 2024-11, Vol. 179, p. 106487, Article 106487
Main Authors: Wei, Qinglai; Zhang, Weiqin
Format: Article
Language: English
DOI: 10.1016/j.neunet.2024.106487
Publisher: Elsevier Ltd (United States)
PMID: 38986188
Rights: © 2024 Elsevier Ltd. All rights reserved.
ORCID: 0000-0001-7002-9800 (Wei, Qinglai)
ISSN: 0893-6080
EISSN: 1879-2782
Source: ScienceDirect Freedom Collection
Subjects: Algorithms; Bias mitigation; Class incremental learning; Feature independence; Humans; Machine Learning; Neural Networks, Computer; Orthogonality