An α-β-Divergence-Generalized Recommender for Highly Accurate Predictions of Missing User Preferences

Bibliographic Details
Published in: IEEE Transactions on Cybernetics, 2022-08, Vol. 52 (8), p. 8006-8018
Main Authors: Shang, Mingsheng; Yuan, Ye; Luo, Xin; Zhou, MengChu
Format: Article
Language:English
Description
Summary: To quantify user-item preferences, a recommender system (RS) commonly adopts a high-dimensional and sparse (HiDS) matrix. Such a matrix can be represented by a non-negative latent factor analysis model relying on a single latent factor (LF)-dependent, non-negative, and multiplicative update algorithm. However, existing models' representation ability is limited by their specialized learning objective. To address this issue, this study proposes an α-β-divergence-generalized model that enjoys fast convergence. Its ideas are three-fold: 1) generalizing its learning objective with α-β-divergence to achieve highly accurate representation of HiDS data; 2) incorporating a generalized momentum method into parameter learning for fast convergence; and 3) implementing self-adaptation of controllable hyperparameters for excellent practicability. Empirical studies on six HiDS matrices from real RSs demonstrate that, compared with state-of-the-art LF models, the proposed one achieves significant gains in both accuracy and efficiency when estimating the huge volume of missing data in an HiDS matrix.
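To make the approach in the summary more concrete, the following is a minimal sketch, assuming a NumPy environment, of a non-negative latent factor model fitted to the observed entries of a sparse rating matrix by minimizing the α-β (AB) divergence, D_AB(p‖q) = -(1/(αβ)) Σ [ p^α q^β - (α/(α+β)) p^(α+β) - (β/(α+β)) q^(α+β) ], defined for α, β, α+β ≠ 0. It uses projected gradient steps with a plain momentum term rather than the paper's single-LF-dependent non-negative multiplicative update with generalized momentum and self-adaptive hyperparameters; all function names, hyperparameter defaults (alpha, beta, lr, momentum, rank), and the (rows, cols, vals) data layout are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: NOT the authors' algorithm or code.
import numpy as np

def ab_divergence(p, q, alpha, beta, eps=1e-12):
    """AB-divergence D_AB(p || q), valid for alpha, beta, alpha+beta != 0."""
    p, q = np.maximum(p, eps), np.maximum(q, eps)
    s = alpha + beta
    return -np.sum(p**alpha * q**beta
                   - alpha / s * p**s
                   - beta / s * q**s) / (alpha * beta)

def train_nlfa_ab(rows, cols, vals, n_users, n_items, rank=20,
                  alpha=0.6, beta=0.4, lr=1e-3, momentum=0.9,
                  n_epochs=100, seed=0, eps=1e-12):
    """Fit non-negative factors U (n_users x rank) and V (n_items x rank) so that
    sum_k U[u, k] * V[i, k] approximates the observed rating vals[t] at
    (rows[t], cols[t]), minimizing the AB-divergence on observed cells only."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(0.01, 0.1, (n_users, rank))
    V = rng.uniform(0.01, 0.1, (n_items, rank))
    vel_U, vel_V = np.zeros_like(U), np.zeros_like(V)   # momentum buffers

    for _ in range(n_epochs):
        # Predictions on observed cells, clamped away from zero for the power terms.
        q = np.maximum(np.sum(U[rows] * V[cols], axis=1), eps)
        p = np.maximum(vals, eps)
        # dD_AB/dq = (1/alpha) * q^(beta-1) * (q^alpha - p^alpha)
        dq = q**(beta - 1.0) * (q**alpha - p**alpha) / alpha
        # Chain rule: scatter each cell's gradient back onto its user/item factor rows.
        grad_U, grad_V = np.zeros_like(U), np.zeros_like(V)
        np.add.at(grad_U, rows, dq[:, None] * V[cols])
        np.add.at(grad_V, cols, dq[:, None] * U[rows])
        # Momentum step followed by clipping keeps both factor matrices non-negative.
        vel_U = momentum * vel_U - lr * grad_U
        vel_V = momentum * vel_V - lr * grad_V
        U = np.maximum(U + vel_U, eps)
        V = np.maximum(V + vel_V, eps)
    return U, V

# Toy usage: three observed cells of a 3-user x 2-item matrix.
rows = np.array([0, 1, 2])
cols = np.array([0, 1, 0])
vals = np.array([4.0, 2.0, 5.0])
U, V = train_nlfa_ab(rows, cols, vals, n_users=3, n_items=2, rank=4)
print(np.round(U @ V.T, 2))   # dense non-negative reconstruction; unobserved cells are the predictions
print(ab_divergence(vals, np.sum(U[rows] * V[cols], axis=1), alpha=0.6, beta=0.4))  # loss on observed cells
```

The gradient-plus-clipping step here is only a stand-in that preserves non-negativity; the paper's contribution lies in the multiplicative update form, the generalized momentum, and the self-adaptation of α and β, none of which this toy sketch reproduces.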
ISSN: 2168-2267, 2168-2275
DOI: 10.1109/TCYB.2020.3026425