
Low-Rank Tensor Regularized Graph Fuzzy Learning for Multi-View Data Processing

Bibliographic Details
Published in: IEEE Transactions on Consumer Electronics, 2024-02, Vol. 70 (1), pp. 2925-2938
Main Authors: Pan, Baicheng; Li, Chuandong; Che, Hangjun; Leung, Man-Fai; Yu, Keping
Format: Article
Language: English
Description
Summary: Multi-view data processing is an effective tool to differentiate the levels of consumers of electronics. Recently, graph-based multi-view clustering methods have attracted widespread attention because they can efficiently capture the relationships among multi-view data points. However, most existing graph-based clustering methods have several shortcomings. First, the commonly adopted Euclidean distance cannot capture nonlinear manifold structure. Second, graph-based methods are mainly hard clustering methods, meaning that each data point is assigned to exactly one cluster. Third, the high-dimensional information shared across multiple views is not taken into account. Thus, a low-rank tensor regularized graph fuzzy learning (LRTGFL) method for multi-view data processing is proposed. In LRTGFL, the Jensen-Shannon divergence replaces the Euclidean distance to capture nonlinear structures more completely. In addition, fuzzy learning turns graph clustering into a soft clustering method. Furthermore, a tensor nuclear norm based on the tensor singular value decomposition (t-SVD) exploits the high-dimensional information across views. The alternating direction method of multipliers (ADMM) is then adopted to solve the LRTGFL model. Finally, the effectiveness and superiority of LRTGFL are demonstrated through comparisons with various state-of-the-art algorithms on eight real-world datasets.
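As a rough illustration of two ingredients named in the abstract (a Jensen-Shannon-based affinity graph and a t-SVD-based tensor nuclear norm), the following Python sketch shows one way such quantities might be computed. This is not the authors' implementation; the function names js_affinity and tsvd_nuclear_norm, the bandwidth parameter sigma, and the 1/n3 normalization are assumptions made for illustration only.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.fft import fft


def js_affinity(X, sigma=1.0):
    """Build a graph affinity matrix from pairwise Jensen-Shannon divergences.

    X : (n_samples, n_features) array with non-negative entries; each row is
        normalized to a probability distribution before comparison.
    sigma : illustrative kernel bandwidth (an assumption, not from the paper).
    """
    P = X / np.clip(X.sum(axis=1, keepdims=True), 1e-12, None)
    n = P.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # scipy returns the JS *distance*; squaring gives the divergence.
            d = jensenshannon(P[i], P[j]) ** 2
            W[i, j] = W[j, i] = np.exp(-d / sigma)
    return W


def tsvd_nuclear_norm(T):
    """Tensor nuclear norm based on the t-SVD: sum of the singular values of
    the frontal slices after a DFT along the third (view) mode.

    T : (n1, n2, n3) real tensor, e.g. per-view affinity matrices stacked
        along the third axis. The 1/n3 scaling is one common convention.
    """
    T_hat = fft(T, axis=2)  # DFT along the view dimension
    total = 0.0
    for k in range(T.shape[2]):
        s = np.linalg.svd(T_hat[:, :, k], compute_uv=False)
        total += s.sum()
    return total / T.shape[2]
```

In this reading, each view would contribute one affinity matrix from js_affinity, the matrices would be stacked into a third-order tensor, and the t-SVD nuclear norm would act as the low-rank regularizer; the fuzzy membership update and the ADMM solver described in the paper are not reproduced here.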
ISSN: 0098-3063
eISSN: 1558-4127
DOI: 10.1109/TCE.2023.3301067