Tight bounds for SVM classification error
Main Authors:
Format: Conference Proceeding
Language: English
Online Access: Request full text
Summary: We find very tight bounds on the error of support vector machine classification within the algorithmic inference framework. The framework is especially suitable for this kind of classifier since (i) the number of support vectors actually employed is known as an ancillary output of the learning procedure, and (ii) confidence intervals for the misclassification probability can be assessed exactly as a function of the cardinality of these vectors. As a result we obtain confidence intervals that are up to an order of magnitude narrower than those supplied in the literature; they carry a slightly different meaning, owing to the different approach from which they derive, but serve the same operational function. We numerically check the coverage of these intervals.
DOI: 10.1109/ICNNB.2005.1614556
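
The record does not reproduce the paper's formulas, so the following is only a minimal sketch of how a confidence interval for the misclassification probability could be computed from the number of support vectors. It assumes the misclassification probability behaves like a Beta(k, m - k + 1) random variable for k support vectors out of m training examples, a common form in algorithmic-inference and compression-style arguments; the function name `svm_error_interval` and the Beta parametrization are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch: two-sided confidence interval for SVM
# misclassification probability as a function of the number of
# support vectors. The Beta(k, m - k + 1) form is an assumption,
# not the formula from the cited paper.
from scipy.stats import beta


def svm_error_interval(n_support_vectors: int, n_samples: int, delta: float = 0.05):
    """Return a (lower, upper) 1 - delta confidence interval for the
    misclassification probability, assumed Beta(k, m - k + 1) with
    k support vectors out of m training examples."""
    k, m = n_support_vectors, n_samples
    lower = beta.ppf(delta / 2, k, m - k + 1)
    upper = beta.ppf(1 - delta / 2, k, m - k + 1)
    return lower, upper


if __name__ == "__main__":
    # Example: 30 support vectors out of 1000 training points, 95% confidence.
    lo, hi = svm_error_interval(30, 1000)
    print(f"95% interval for misclassification probability: [{lo:.4f}, {hi:.4f}]")
```

Note that the interval narrows as the training set grows and widens with the number of support vectors, which matches the abstract's point that the bound is driven by the cardinality of the support vectors rather than by a worst-case capacity term.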