
Learning Unsupervised and Supervised Representations via General Covariance

Bibliographic Details
Published in: IEEE Signal Processing Letters, 2021, Vol. 28, pp. 145-149
Main Authors: Yuan, Yun-Hao, Li, Jin, Li, Yun, Gou, Jianping, Qiang, Jipeng
Format: Article
Language:English
Description
Summary: Component analysis (CA) is a powerful technique for learning discriminative representations in various computer vision tasks. Typical CA methods are essentially based on the covariance matrix of the training data. However, the covariance matrix has clear disadvantages, such as failing to model complex relationships among features and becoming singular in small-sample-size cases. In this letter, we propose a general covariance measure to achieve better data representations. The proposed covariance is characterized by a nonlinear mapping determined by the domain-specific application, leading to greater flexibility and wider applicability in practice. With general covariance, we further present two novel CA methods for learning compact representations and discuss their differences from conventional methods. Experimental results on nine benchmark data sets demonstrate the effectiveness of the proposed methods in terms of accuracy.
ISSN: 1070-9908, 1558-2361
DOI:10.1109/LSP.2020.3044026
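As a rough illustration of the idea described in the abstract, the sketch below contrasts the ordinary sample covariance with a covariance computed on nonlinearly mapped features. The elementwise `tanh` mapping and the function names are hypothetical stand-ins chosen purely for illustration; the letter leaves the actual nonlinear mapping domain-specific, so this should not be read as the authors' formulation.

```python
import math

def covariance(rows):
    """Sample covariance matrix of a list of feature vectors (lists of floats)."""
    n, d = len(rows), len(rows[0])
    mean = [sum(r[j] for r in rows) / n for j in range(d)]
    centered = [[r[j] - mean[j] for j in range(d)] for r in rows]
    # Unbiased estimator: divide by n - 1.
    return [[sum(c[i] * c[j] for c in centered) / (n - 1)
             for j in range(d)] for i in range(d)]

def general_covariance(rows, phi=math.tanh):
    """Covariance of nonlinearly mapped features, i.e. cov(phi(X)).

    `phi` is applied elementwise here as an illustrative choice; in the
    letter the mapping is chosen per application.
    """
    mapped = [[phi(v) for v in r] for r in rows]
    return covariance(mapped)

# Toy 2-D data: both matrices are symmetric, but the mapped covariance
# weights large feature values differently because tanh saturates.
data = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]]
C = covariance(data)
G = general_covariance(data)
```

The design point the abstract makes is that replacing raw features with mapped features changes the second-order statistics the CA method operates on, which is what gives the general covariance its extra flexibility.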