Deep Learning for Visual-Features Extraction Based Personalized User Modeling
Published in: SN Computer Science, July 2022, Vol. 3, No. 4, Article 261
Format: Article
Language: English
Summary: Personalized recommender systems help users choose relevant resources and items from many alternatives, a challenge that remains important today. In recent years, deep learning has achieved notable success in several research areas, such as computer vision, natural language processing, and image processing. In this paper, we present a new approach that exploits the images describing items to build a personalized user model. To this end, we use deep learning to extract latent features describing the images and to reduce their dimensionality. We then associate these latent features with user preferences to build the personalized model, which is used in a Collaborative Filtering (CF) algorithm to make recommendations. To evaluate the approach, we apply it to two large real-world datasets from different domains, fashion and movies, using fashion data from Amazon.com and movie data from MovieLens. The results show that performance with clothing images exceeds performance with movie posters, suggesting that fashion images carry more weight in users' preferences. Finally, we compare our results to other approaches based on collaborative filtering algorithms.
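The pipeline sketched in the abstract (visual feature extraction, dimensionality reduction, user model, CF-style ranking) can be illustrated with a minimal numpy-only sketch. This is not the authors' implementation: the pretrained CNN that would extract visual features in the paper is replaced here by random vectors for self-containment, PCA stands in for the (unspecified) dimensionality-reduction step, and the function names `user_model` and `recommend` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for CNN image features: in the paper, a deep network would
# extract these from each item's image; here they are random vectors.
n_items, feat_dim, latent_dim = 50, 128, 8
item_features = rng.normal(size=(n_items, feat_dim))

# Dimensionality reduction via PCA (SVD on the centered feature matrix).
centered = item_features - item_features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
item_latent = centered @ vt[:latent_dim].T  # shape (n_items, latent_dim)

def user_model(liked_item_ids):
    """Personalized user model: mean latent features of the items a user liked."""
    return item_latent[liked_item_ids].mean(axis=0)

def recommend(profile, k=5, exclude=()):
    """Rank items by cosine similarity between the user model and item features."""
    sims = item_latent @ profile
    sims /= np.linalg.norm(item_latent, axis=1) * (np.linalg.norm(profile) + 1e-12)
    sims[list(exclude)] = -np.inf  # never re-recommend already-seen items
    return np.argsort(-sims)[:k]

liked = [0, 3, 7]
profile = user_model(liked)
top = recommend(profile, k=5, exclude=liked)
```

In the paper's setting, `item_features` would come from images of clothing or movie posters, and the similarity ranking would feed a full collaborative-filtering algorithm rather than this plain content-based scorer.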
ISSN: 2662-995X; 2661-8907
DOI: 10.1007/s42979-022-01131-y