K-winner machines for pattern classification

Bibliographic Details
Published in: IEEE Transactions on Neural Networks, 2001-03, Vol. 12 (2), p. 371-385
Main Authors: Ridella, S., Rovetta, S., Zunino, R.
Format: Article
Language:English
Description
Summary: The paper describes the K-winner machine (KWM) model for classification. KWM training uses unsupervised vector quantization followed by a calibration step that labels the resulting data-space partitions. A K-winner classifier seeks the largest set of best-matching prototypes that agree on a test pattern, and the size of this set provides a local measure of confidence. A theoretical analysis characterizes the growth function of a K-winner classifier, and the result leads to tight bounds on generalization performance. The method proves suitable for high-dimensional multiclass problems with large amounts of data. Experimental results on both a synthetic domain and a real one (NIST handwritten numerals) confirm the approach's effectiveness and the consistency of the theoretical framework.
ISSN:1045-9227
2162-237X
1941-0093
2162-2388
DOI:10.1109/72.914531
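
Illustrative sketch: the decision rule described in the summary above (find the largest set of best-matching prototypes that agree on a class, and use its size as a confidence measure) can be read as the short Python sketch below. This is only an interpretation of the abstract, not the authors' reference implementation; the prototype array, the calibration labels, and the Euclidean distance metric are assumptions.

import numpy as np

def kwm_classify(x, prototypes, labels):
    """K-winner decision rule as read from the abstract (illustrative sketch).

    prototypes : (n, d) array of prototype vectors from vector quantization (assumed).
    labels     : (n,) array of class labels assigned to prototypes during calibration.
    Returns (predicted_label, K), where K is the size of the largest set of
    best-matching prototypes that agree on the label; a larger K indicates
    higher local confidence.
    """
    # Rank prototypes by Euclidean distance to the test pattern x (assumed metric).
    order = np.argsort(np.linalg.norm(prototypes - x, axis=1))
    ranked = labels[order]

    # Grow K while the K nearest prototypes all carry the same label.
    k = 1
    while k < len(ranked) and ranked[k] == ranked[0]:
        k += 1
    return ranked[0], k

# Toy usage with made-up prototypes for two classes.
protos = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [1.1, 0.9]])
labs = np.array([0, 0, 1, 1])
print(kwm_classify(np.array([0.1, 0.0]), protos, labs))  # class 0 with K = 2

The returned K plays the role of the local confidence measure mentioned in the summary; the growth-function analysis and generalization bounds developed in the paper are stated in terms of such K-winner sets and are not reproduced in this sketch.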