
Improving the efficiency of IRWLS SVMs using parallel Cholesky factorization

Bibliographic Details
Published in: Pattern Recognition Letters, 2016-12, Vol. 84, pp. 91-98
Main Authors: Díaz Morales, Roberto; Navia Vázquez, Ángel
Format: Article
Language:English
Description
Summary:
• This paper proposes a new scheme to implement parallel SVMs.
• We provide the first parallel implementation of the IRWLS procedure to train an SVM.
• We provide a new parallel implementation of IRWLS to train a semi-parametric SVM.
• Parallel and semi-parametric solutions broaden the field of action of SVMs.
This paper proposes a new and efficient parallel scheme for the Iterative Re-Weighted Least Squares (IRWLS) procedure to solve Support Vector Machines (SVMs). The procedure uses a parallel Cholesky decomposition to solve the linear systems that arise at every iteration. In particular, we provide two different solutions: a parallel implementation of the IRWLS procedure (PIRWLS) to solve a full SVM, and a new parallel implementation of a semi-parametric SVM model (PSIRWLS). Both solutions have been implemented for multicore and multiprocessor environments with shared memory. We have benchmarked these algorithms against LibSVM, SVMLight and PS-SVM. Experimental results show that, on large datasets, our systems offer better parallelization capabilities and higher speed.
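To make the abstract's core idea concrete, here is a minimal, illustrative sketch of an IRWLS-style SVM training loop in which each iteration builds a weighted linear system and solves it with a Cholesky factorization, the step that PIRWLS/PSIRWLS parallelize. This is not the authors' code: the RBF kernel, the weighting rule a_i = C/e_i, the omission of the bias term, and the use of SciPy's serial LAPACK-backed Cholesky routines are simplifying assumptions for illustration only.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Z.
    d2 = (np.sum(X * X, axis=1)[:, None]
          + np.sum(Z * Z, axis=1)[None, :]
          - 2.0 * X @ Z.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def irwls_svm(X, y, C=1.0, gamma=1.0, max_iter=50, tol=1e-6, jitter=1e-8):
    # Toy IRWLS loop for a kernel SVM (bias term omitted for brevity).
    # y must hold +1/-1 labels. At every iteration the hinge loss is
    # replaced by a weighted quadratic and the resulting symmetric
    # positive-definite system is solved by a Cholesky factorization;
    # that solve is the step the paper parallelizes.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    beta = np.zeros(n)
    for _ in range(max_iter):
        f = K @ beta                        # current decision values
        e = 1.0 - y * f                     # margin violations
        S = e > 0.0                         # active ("error") samples
        if not np.any(S):                   # no violations left: done
            break
        a = C / np.maximum(e[S], 1e-12)     # IRWLS weights (one common choice)
        # Weighted normal equations restricted to the active set:
        #   (K_SS + diag(1/a)) beta_S = y_S
        A = K[np.ix_(S, S)] + np.diag(1.0 / a) + jitter * np.eye(int(S.sum()))
        beta_new = np.zeros(n)
        beta_new[S] = cho_solve(cho_factor(A, lower=True), y[S])
        if np.linalg.norm(beta_new - beta) <= tol * (1.0 + np.linalg.norm(beta)):
            beta = beta_new
            break
        beta = beta_new
    return beta, K

# Usage sketch: predict with np.sign(rbf_kernel(X_test, X, gamma) @ beta).

In the paper's setting, the Cholesky factorization of the system matrix is the dominant cost per iteration, which is why replacing it with a parallel blocked factorization on shared-memory multicore hardware yields the reported speed-ups.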
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2016.08.015