A dynamic hypergraph regularized non-negative Tucker decomposition framework for multiway data analysis

Bibliographic Details
Published in: International Journal of Machine Learning and Cybernetics, 2022-12, Vol. 13 (12), p. 3691-3710
Main Authors: Huang, Zhenhao; Zhou, Guoxu; Qiu, Yuning; Yu, Yuyuan; Dai, Haolei
Format: Article
Language: English
Summary: Non-negative tensor decomposition has achieved significant success in machine learning due to its strength in extracting non-negative, parts-based features and physically meaningful latent components from high-order data. To improve its representation ability, hypergraphs have been incorporated into tensor decomposition models to capture the nonlinear manifold structure of the data. However, previous hypergraph regularized tensor decomposition methods construct the hypergraph in the original data space, which may yield an inaccurate manifold structure and degrade representation performance when the original data are corrupted by noise. To solve these problems, in this paper we propose a dynamic hypergraph regularized non-negative Tucker decomposition (DHNTD) method for multiway data analysis. Specifically, to take full advantage of the multilinear structure and nonlinear manifold of tensor data, we learn the dynamic hypergraph and the non-negative low-dimensional representation in a unified framework. Moreover, we develop a multiplicative update (MU) algorithm to solve the resulting optimization problem and theoretically prove its convergence. Experimental results on clustering tasks with six image datasets demonstrate the superiority of the proposed method over state-of-the-art methods.
ISSN: 1868-8071, 1868-808X
DOI: 10.1007/s13042-022-01620-9
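
The record above is descriptive only; the paper itself sits behind the DOI. As a rough, self-contained sketch of the ingredients the abstract names (non-negative Tucker factors, a hypergraph Laplacian penalty on the sample mode, multiplicative updates), the following minimal NumPy implementation may help. It is not the authors' DHNTD algorithm: the hypergraph here is fixed rather than dynamically re-learned from the low-dimensional representation, the unnormalized Laplacian and update rules are standard textbook choices, the last tensor mode is assumed to index samples, and every function name and parameter is hypothetical.

import numpy as np


def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)


def fold(M, mode, shape):
    """Inverse of `unfold` for a tensor of the given full shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)


def mode_dot(T, M, mode):
    """Mode-n product T x_n M, where M has shape (new_dim, T.shape[mode])."""
    shape = list(T.shape)
    shape[mode] = M.shape[0]
    return fold(M @ unfold(T, mode), mode, shape)


def hypergraph_affinity(H, w):
    """S = H W D_e^{-1} H^T for incidence H and hyperedge weights w.

    Row sums of S equal the vertex degrees, so L = D_v - S is a valid
    (unnormalized) hypergraph Laplacian.
    """
    edge_deg = H.sum(axis=0)  # delta(e); guard below avoids 0/0 on empty edges
    return H @ np.diag(w / np.maximum(edge_deg, 1.0)) @ H.T


def ntd_hypergraph_mu(X, ranks, S, lam=0.1, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates for hypergraph-regularized non-negative Tucker.

    Minimizes ||X - G x_1 A_1 ... x_N A_N||_F^2 + lam * tr(A_N^T L A_N)
    with L = D_v - S, keeping the core G and all factors A_n non-negative.
    """
    rng = np.random.default_rng(seed)
    N = X.ndim
    A = [rng.random((X.shape[n], ranks[n])) + eps for n in range(N)]
    G = rng.random(ranks) + eps
    Dv = np.diag(S.sum(axis=1))  # vertex degree matrix

    for _ in range(n_iter):
        for n in range(N):
            # X_(n) ~= A_n W, with W the mode-n unfolding of G times the other factors
            Y = G
            for k in range(N):
                if k != n:
                    Y = mode_dot(Y, A[k], k)
            W = unfold(Y, n)
            num = unfold(X, n) @ W.T
            den = A[n] @ (W @ W.T)
            if n == N - 1:                    # sample mode carries the penalty
                num = num + lam * (S @ A[n])  # from splitting L = D_v - S
                den = den + lam * (Dv @ A[n])
            A[n] = A[n] * (num / np.maximum(den, eps))
        # Core update: G <- G * (X x_n A_n^T) / (G x_n A_n^T A_n)
        num, den = X, G
        for n in range(N):
            num = mode_dot(num, A[n].T, n)
            den = mode_dot(den, A[n].T @ A[n], n)
        G = G * (num / np.maximum(den, eps))
    return G, A


if __name__ == "__main__":
    # Tiny smoke test: random non-negative 3-way data, random binary hypergraph.
    rng = np.random.default_rng(1)
    X = rng.random((20, 20, 50))                   # 50 samples on the last mode
    H = (rng.random((50, 12)) < 0.2).astype(float)
    H[H.sum(axis=1) == 0, 0] = 1.0                 # every vertex joins some edge
    S = hypergraph_affinity(H, np.ones(12))
    G, A = ntd_hypergraph_mu(X, ranks=(5, 5, 8), S=S, lam=0.5, n_iter=100)
    print(G.shape, [a.shape for a in A])

In DHNTD proper, the incidence structure would additionally be re-estimated from the current low-dimensional representation at each iteration (the "dynamic" part); here S would simply be rebuilt, e.g. from k-nearest neighbors of the rows of A[-1], between outer iterations.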