
Neurodynamical classifiers with low model complexity

Bibliographic Details
Published in: Neural networks 2020-12, Vol.132, p.405-415
Main Authors: Pant, Himanshu, Soman, Sumit, Jayadeva, Bhaya, Amit
Format: Article
Language:English
Subjects:
description The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane classifier by minimizing an upper bound on the Vapnik–Chervonenkis (VC) dimension. The VC dimension measures the capacity or model complexity of a learning machine. Vapnik’s risk formula indicates that models with smaller VC dimension are expected to show improved generalization. On many benchmark datasets, the MCM generalizes better than SVMs and uses far fewer support vectors than the number used by SVMs. In this paper, we describe a neural network that converges to the MCM solution. We employ the MCM neurodynamical system as the final layer of a neural network architecture. Our approach also optimizes the weights of all layers in order to minimize the objective, which is a combination of a bound on the VC dimension and the classification error. We illustrate the use of this model for robust binary and multi-class classification. Numerical experiments on benchmark datasets from the UCI repository show that the proposed approach is scalable and accurate, and learns models with improved accuracies and fewer support vectors.
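The objective described above combines a bound on the VC dimension with the classification error. As an illustration, the following sketch solves the soft-margin linear-programming form of the MCM for the final linear layer only (not the full neural architecture of the paper); the helper name `mcm_fit`, the value of `C`, and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def mcm_fit(X, y, C=10.0):
    """Soft-margin MCM as a linear program (illustrative sketch).

    Variables z = [u (d), v, beta, q (n)].  Minimize beta + C*sum(q)
    subject to   y_i*(u.x_i + v) + q_i <= beta   (upper bound on margins)
                 y_i*(u.x_i + v) + q_i >= 1     (lower bound on margins)
                 q_i >= 0.
    beta then upper-bounds the margin ratio that controls the VC bound.
    """
    n, d = X.shape
    Yx = y[:, None] * X  # rows y_i * x_i
    # Objective coefficients over [u, v, beta, q].
    c = np.concatenate([np.zeros(d + 1), [1.0], np.full(n, C)])
    # Stack both constraint families as A_ub @ z <= b_ub.
    upper = np.hstack([Yx, y[:, None], -np.ones((n, 1)), np.eye(n)])     # <= 0
    lower = np.hstack([-Yx, -y[:, None], np.zeros((n, 1)), -np.eye(n)])  # <= -1
    res = linprog(c,
                  A_ub=np.vstack([upper, lower]),
                  b_ub=np.concatenate([np.zeros(n), -np.ones(n)]),
                  bounds=[(None, None)] * (d + 2) + [(0, None)] * n,
                  method="highs")
    u, v = res.x[:d], res.x[d]
    return u, v

# Toy separable data; the learned hyperplane sign(u.x + v) should
# classify every point correctly.
X = np.array([[2.0, 2.0], [3.0, 1.5], [-2.0, -1.0], [-3.0, -2.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
u, v = mcm_fit(X, y)
pred = np.sign(X @ u + v)
```

Minimizing `beta` shrinks the spread of the functional margins, which is what ties the objective to the VC-dimension bound rather than to the margin width alone, as in SVMs.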
doi_str_mv 10.1016/j.neunet.2020.08.013
format article
addtitle Neural Netw
pmid 33011671
publisher United States: Elsevier Ltd
rights 2020 Elsevier Ltd
tpages 11
orcidid 0000-0002-3144-1242
orcidid 0000-0003-1926-3065
identifier ISSN: 0893-6080
issn 0893-6080
1879-2782
recordid cdi_proquest_miscellaneous_2448638607
source Elsevier
subjects Classification
Linear programming
Minimal Complexity Machine
Neural network
Neural Networks, Computer
Support Vector Machine
VC dimension