Kernel-Based Sparse Representation Learning With Global and Local Low-Rank Label Constraint
Published in: IEEE Transactions on Computational Social Systems, Feb. 2024, Vol. 11, No. 1, pp. 488-502
Format: Article
Language: English
Summary: Due to the large-scale and multiscale nature of social media data, sparse representation (SR) learning methods are widely used. However, existing SR methods suffer from three problems: 1) they neglect the fact that the semantic features of the data may change during iterative learning, which leads to weak semantic learning; 2) they often assume that the data are linearly separable, while in many real-world applications the data are nonlinear; and 3) they cannot ensure the low-rank and discriminative properties of the data at the same time and may neglect the global properties of the data, leading to suboptimal solutions. To solve these problems, we propose a novel method, named kernel-based SR learning with global and local low-rank label (KSR-GL3) constraint, which strengthens the semantic information and keeps the semantic features invariant during learning. First, we map the data into a high-dimensional feature space to learn a linear representation of the samples. Second, a global and local low-rank label (GL3) constraint is used to ensure the semantic invariance, low-rankness, and discrimination of the features during learning. Third, an \ell _{2,1}-norm penalty is imposed to explore the sparseness of the subspace. Mathematical analyses show that GL3 retains the intrinsic properties of the data during learning. By combining these three components, a generalized power iteration (GPI) approach is applied to build the model and solve the challenging optimization problem. Through KSR-GL3, a sparse, low-rank, and discriminative subspace is produced from the high-dimensional, orthogonal representation of the data under the guidance of semantics, while the intrinsic properties of the data are preserved. Extensive experiments on six datasets against five advanced algorithms demonstrate its promising prospects.
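The \ell _{2,1}-norm and the generalized power iteration (GPI) solver named in the abstract are standard building blocks, and a minimal sketch of each may help readers of this record. The sketch below is not the authors' KSR-GL3 implementation; the quadratic-plus-linear objective max_{W^T W = I} tr(W^T A W) + 2 tr(W^T B) is an assumed stand-in based on the usual GPI formulation, and the matrices A and B are placeholders for whatever the paper's subproblem supplies.

```python
import numpy as np

def l21_norm(W):
    """ℓ2,1-norm of W: sum of the ℓ2 norms of the rows.

    Penalizing this drives entire rows of W to zero, which is
    how SR-style methods obtain row-sparse (feature-selecting)
    projection matrices.
    """
    return np.sqrt((W ** 2).sum(axis=1)).sum()

def gpi(A, B, k, iters=500, seed=0):
    """Generalized power iteration (sketch, assumed objective).

    Solves  max_{W^T W = I}  tr(W^T A W) + 2 tr(W^T B)
    for symmetric A (d x d) and B (d x k) by repeating:
      1) M = 2 * A_shifted @ W + 2 * B
      2) W = U @ Vt, where M = U S Vt is the thin SVD,
    which is the closest orthonormal-column matrix to M.
    """
    d = A.shape[0]
    # Shift A to be positive definite; adding alpha*I only adds the
    # constant alpha*k to the objective, so the maximizer is unchanged,
    # but the shift makes the iteration monotonically nondecreasing.
    alpha = abs(np.linalg.eigvalsh(A).min()) + 1.0
    A_shifted = A + alpha * np.eye(d)
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))  # random orthonormal start
    for _ in range(iters):
        M = 2.0 * A_shifted @ W + 2.0 * B
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W = U @ Vt
    return W
```

With B = 0 the objective reduces to tr(W^T A W) under W^T W = I, so the iteration recovers the span of the top-k eigenvectors of A, which is a convenient sanity check on any GPI implementation.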
ISSN: 2329-924X, 2373-7476
DOI: 10.1109/TCSS.2022.3227406