Incremental learning for Lagrangian ε-twin support vector regression
Published in: Soft Computing (Berlin, Germany), 2023-05, Vol. 27 (9), pp. 5357-5375
Format: Article
Language: English
Summary: This paper investigates the online learning problem of Lagrangian ε-twin support vector regression (L-ε-TSVR), with the goal of presenting incremental implementations. First, to address the inability of the existing L-ε-TSVR to update the model efficiently in incremental scenarios, an incremental Lagrangian ε-twin support vector regression (IL-ε-TSVR) based on the semi-smooth Newton method is proposed. By applying matrix inverse theorems to update the Hessian matrices incrementally, IL-ε-TSVR lowers the time complexity and expedites the training process. However, in the nonlinear case, the training speed of IL-ε-TSVR decreases rapidly as the kernel matrix grows. Therefore, an incremental reduced Lagrangian ε-twin support vector regression (IRL-ε-TSVR) is presented. IRL-ε-TSVR employs the reduced technique to restrict the size of the inverse matrix, at the cost of slightly lower prediction accuracy. Next, to mitigate the accuracy loss caused by parameter reduction, a novel regularization term is introduced to replace the original one, and an improved incremental reduced Lagrangian ε-twin support vector regression (IIRL-ε-TSVR) is designed. Results on UCI benchmark datasets show that IL-ε-TSVR can effectively solve the linear regression problem in incremental scenarios and achieves almost the same generalization capability as offline learning. Moreover, IRL-ε-TSVR and IIRL-ε-TSVR reduce the training time of the nonlinear regression model and obtain sparse solutions, with generalization capabilities close to those of their offline counterparts. In particular, the proposed algorithms enable fast incremental learning on large-scale data.
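The abstract's core mechanism is updating an inverse Hessian cheaply when one new sample arrives, via a matrix inverse theorem rather than re-inverting from scratch. The paper's exact update is not reproduced in this record; the sketch below uses the Sherman-Morrison identity for a rank-one update, which is a standard instance of such theorems. The function name and the assumption that adding a sample perturbs the Hessian as H + g gᵀ are illustrative, not taken from the paper.

```python
import numpy as np

def sherman_morrison_update(H_inv, g):
    """Update the inverse of a Hessian-like matrix after a rank-one change.

    Assumes the new sample contributes H_new = H + g g^T. Then by the
    Sherman-Morrison identity:
        H_new^{-1} = H^{-1} - (H^{-1} g)(H^{-1} g)^T / (1 + g^T H^{-1} g),
    which costs O(n^2) instead of the O(n^3) of a fresh inversion.
    """
    Hg = H_inv @ g                       # n-vector: H^{-1} g
    denom = 1.0 + g @ Hg                 # scalar: 1 + g^T H^{-1} g (> 0 for SPD H)
    return H_inv - np.outer(Hg, Hg) / denom
```

Applied per arriving sample, this keeps a running inverse in quadratic time, which is the source of the speedup the abstract attributes to IL-ε-TSVR.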
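For the nonlinear case, the abstract describes restricting the size of the matrix to be inverted with a "reduced technique". A common form of this idea (the reduced kernel method) evaluates the kernel only against a small random subset of the training points, giving an m×r rectangular kernel matrix instead of the full m×m one. The kernel choice, parameter names, and subset-selection strategy below are assumptions for illustration, not details from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian (RBF) kernel matrix between the rows of X and the rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def reduced_kernel_features(X, r, gamma=0.5, seed=0):
    """Map m samples to an m x r reduced kernel matrix K(X, X_bar),
    where X_bar is a random subset of r training points. Downstream
    matrices to invert then scale with r, not with m."""
    rng = np.random.default_rng(seed)
    basis_idx = rng.choice(len(X), size=r, replace=False)
    return rbf_kernel(X, X[basis_idx], gamma), basis_idx
```

Because the subsequent Hessian is built from an m×r matrix, its inverse is only r×r; this is the size cap the abstract trades against a small loss in prediction accuracy.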
ISSN: 1432-7643; 1433-7479
DOI: 10.1007/s00500-022-07755-9