Fast optimization of PNN based on center neighbor and KLT
Main Authors:
Format: Conference Proceeding
Language: English
Summary: Probabilistic Neural Networks (PNN) learn quickly from examples in one pass and asymptotically achieve the Bayes-optimal decision boundaries. The major disadvantage of PNN is that it requires one node, or neuron, for each training sample. Various clustering techniques have been proposed to reduce this requirement to one node per cluster center. A new fast optimization of PNN is investigated here that iteratively computes the center of each class's unrecognized samples and adds that center's nearest neighbors to the pattern layer. To build the classification model quickly, weighting and incremental techniques are introduced to improve the learning speed. To further shrink the structure of the PNN, the Karhunen-Loève (KL) transform is adopted to compress the feature dimension. The proposed approach thus reduces redundancy not only among samples, through nearest-neighbor selection, but also among features, through the KL transform. Experiments on UCI datasets show an appropriate tradeoff between training time and generalization ability.
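The summary describes the method only at a high level. As a rough NumPy sketch of the two ideas it names, selecting pattern-layer nodes as the nearest neighbors of the centers of each class's unrecognized samples, and compressing features with the KL transform, the following may help; the function names, the Gaussian kernel width `sigma`, the seeding of one node per class, and the stopping rules are illustrative assumptions, not the authors' published procedure.

```python
import numpy as np

def klt_compress(X, k):
    """Karhunen-Loeve transform: project the data onto the top-k
    eigenvectors of its covariance matrix (equivalent to PCA)."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k components
    return Xc @ top

def pnn_predict(patterns, pattern_labels, X, classes, sigma=0.5):
    """Standard PNN decision: a Gaussian Parzen-window score per class,
    summed over that class's pattern-layer nodes."""
    preds = []
    for x in X:
        d2 = np.sum((patterns - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma ** 2))           # kernel activations
        scores = [k[pattern_labels == c].sum() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

def center_neighbor_pnn(X, y, sigma=0.5, max_iter=50):
    """Iteratively grow the pattern layer: for each class, take the
    center (mean) of its still-misclassified samples and add the
    training sample nearest to that center as a new pattern node."""
    classes = np.unique(y)
    pat_idx = []                                     # pattern-layer node indices
    for c in classes:                                # seed: one node per class
        idx = np.where(y == c)[0]
        center = X[idx].mean(axis=0)
        pat_idx.append(idx[np.argmin(np.sum((X[idx] - center) ** 2, axis=1))])
    for _ in range(max_iter):
        preds = pnn_predict(X[pat_idx], y[pat_idx], X, classes, sigma)
        wrong = preds != y
        if not wrong.any():
            break                                    # all training samples recognized
        added = False
        for c in classes:
            idx = np.where(wrong & (y == c))[0]
            if len(idx) == 0:
                continue
            center = X[idx].mean(axis=0)             # center of unrecognized samples
            cand = idx[np.argmin(np.sum((X[idx] - center) ** 2, axis=1))]
            if cand not in pat_idx:
                pat_idx.append(cand)                 # incremental pattern-layer growth
                added = True
        if not added:
            break
    return X[pat_idx], y[pat_idx]
```

In this sketch one would first call `klt_compress` on the training features, grow the pattern layer with `center_neighbor_pnn`, and classify new points with `pnn_predict`; the pattern layer typically ends up much smaller than the full training set, which is the reduction the summary claims.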
ISSN: 2160-133X
DOI: 10.1109/ICMLC.2010.5580835