A Mixed Integer Programming approach to maximum margin 0-1 loss classification
Main Authors:
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Summary: Many of the most successful classifiers are based on convex surrogate loss functions. However, it is widely accepted that the 0-1 loss is more natural for evaluating classification performance, and many surrogate loss functions can be understood as convex approximations to the 0-1 loss. In this paper, we therefore attempt to minimize the 0-1 loss directly via Mixed Integer Programming, and a maximum margin 0-1 loss is presented. To test the performance of the proposed loss measurement, two maximum margin 0-1 loss classifiers are implemented, for binary classification and for semi-supervised classification respectively. According to experimental results on publicly available UCI datasets, the maximum margin 0-1 loss approach achieves superior performance. Meanwhile, in terms of computational efficiency, with the rapid development of Mixed Integer Programming in recent years, state-of-the-art solvers can output the globally optimal solution of the proposed approach in seconds when the number of training instances N and the dimension of the feature space D are relatively small (empirically, N ≤ 100 and D ≤ 20). It can therefore already be adopted to solve small-scale classification problems in the real world.
ISSN: 1097-5764, 2640-7736
DOI: 10.1109/RADAR.2014.7060267
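To make the idea in the summary concrete, the following is a minimal sketch of how direct 0-1 loss minimization for a linear classifier can be posed as a big-M Mixed Integer Program and handed to an off-the-shelf solver. It is not the paper's exact maximum margin formulation: the toy data, variable bounds, big-M constant, and the L1 term standing in for the margin/regularization component are illustrative assumptions, and the PuLP modelling library with its bundled CBC solver is assumed to be available.

```python
# Sketch: 0-1 loss minimization as a big-M MIP (assumptions noted in the text above).
import pulp

# Toy binary classification data; labels must be in {-1, +1}.
X = [[1.0, 2.0], [2.0, 1.5], [-1.0, -2.0], [-2.0, -1.0], [0.5, -0.5]]
y = [+1, +1, -1, -1, +1]
N, D = len(X), len(X[0])

M = 1e3   # big-M constant: must exceed any attainable margin violation (assumption)
C = 0.01  # weight on the L1 term used here as a stand-in margin/regularizer (assumption)

prob = pulp.LpProblem("max_margin_0_1_loss_sketch", pulp.LpMinimize)

w = [pulp.LpVariable(f"w_{j}", lowBound=-10, upBound=10) for j in range(D)]
b = pulp.LpVariable("b", lowBound=-10, upBound=10)
u = [pulp.LpVariable(f"u_{j}", lowBound=0) for j in range(D)]          # u_j >= |w_j|
z = [pulp.LpVariable(f"z_{i}", cat=pulp.LpBinary) for i in range(N)]   # 0-1 loss indicators

# Objective: number of margin-violating points (the 0-1 loss) plus a small linear term.
prob += pulp.lpSum(z) + C * pulp.lpSum(u)

for i in range(N):
    # If z_i = 0 the point must satisfy the unit margin; if z_i = 1 the constraint
    # is relaxed by M and the objective pays one unit of 0-1 loss.
    prob += y[i] * (pulp.lpSum(w[j] * X[i][j] for j in range(D)) + b) >= 1 - M * z[i]
for j in range(D):
    prob += u[j] >= w[j]
    prob += u[j] >= -w[j]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("0-1 loss:", sum(round(v.varValue) for v in z))
print("w =", [v.varValue for v in w], "b =", b.varValue)
```

On small instances of the size the summary mentions (N ≤ 100, D ≤ 20), a branch-and-bound solver such as CBC or a commercial alternative can typically certify global optimality of such a model quickly, which is the practical point the abstract makes.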