
Kernel linear regression for face recognition

Bibliographic Details
Published in: Neural Computing & Applications, 2014-06, Vol. 24 (7-8), p. 1843-1849
Main Authors: Lu, Yuwu, Fang, Xiaozhao, Xie, Binglei
Format: Article
Language: English
Summary: Linear regression uses the least-squares algorithm to solve the linear regression equation. Linear regression classification (LRC) shows good classification performance on face image data. However, when the linear regression axes of class-specific samples intersect, LRC cannot reliably classify samples distributed around the intersections. Moreover, LRC does not perform well under severe lighting variations. This paper proposes a new classification method, kernel linear regression classification (KLRC), based on LRC and the kernel trick. KLRC is a nonlinear extension of LRC that offsets these drawbacks. KLRC implicitly maps the data into a high-dimensional kernel space using the nonlinear mapping determined by a kernel function. Through this mapping, KLRC makes the data more linearly separable and performs well for face recognition under varying lighting. For comparison, we conduct experiments on three standard databases under several evaluation protocols. The proposed methodology not only outperforms LRC but also performs better than typical kernel methods such as kernel linear discriminant analysis and kernel principal component analysis.
ISSN: 0941-0643, 1433-3058
DOI: 10.1007/s00521-013-1435-6
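
The abstract describes the method only at a high level; below is a minimal sketch of how a kernel linear regression classifier along these lines might be implemented. The RBF kernel, the gamma value, and the small ridge term added for numerical stability are illustrative assumptions not specified in this record. Each test sample is assigned to the class whose regression subspace in the kernel-induced feature space reconstructs it with the smallest residual.

import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    # Pairwise RBF kernel values between the rows of A and B.
    d2 = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def klrc_predict(X_train, y_train, X_test, gamma=1e-3, ridge=1e-6):
    # Sketch of kernel linear regression classification: for each class,
    # regress the test samples onto the span of that class's training
    # samples in the kernel feature space, compute the squared
    # reconstruction residual, and predict the class with the smallest one.
    classes = np.unique(y_train)
    residuals = np.zeros((X_test.shape[0], classes.size))
    k_yy = np.ones(X_test.shape[0])              # k(y, y) = 1 for the RBF kernel
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]               # class-specific samples
        Kc = rbf_kernel(Xc, Xc, gamma)           # class Gram matrix
        Kc += ridge * np.eye(Kc.shape[0])        # assumed ridge for stability
        k_cy = rbf_kernel(Xc, X_test, gamma)     # kernel values vs. test samples
        alpha = np.linalg.solve(Kc, k_cy)        # regression coefficients
        # ||phi(y) - Phi_c @ alpha||^2 = k(y, y) - k_cy^T Kc^{-1} k_cy
        residuals[:, j] = k_yy - np.sum(k_cy * alpha, axis=0)
    return classes[np.argmin(residuals, axis=1)]

For example, with face images flattened into row vectors, klrc_predict(X_train, y_train, X_test) returns one predicted class label per test image; the assumed gamma and ridge values would need tuning for a given dataset.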