RLISR: A Deep Reinforcement Learning Based Interactive Service Recommendation Model

Saved in:
Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, p. 90204-90217
Main Authors: Zhang, Mingwei, Qu, Yingjie, Li, Yage, Wen, Xingyu, Zhou, Yi
Format: Article
Language:English
Subjects:
Description
Summary: An increasing number of services are being offered online, which makes it difficult to select appropriate services during mashup development. Many service recommendation studies have achieved remarkable results in alleviating this service selection challenge. However, they are limited to suggesting services for only a single round or the next round, and they ignore the interactive nature of real-world service recommendation scenarios. As a result, existing methods cannot capture developers' shifting requirements or achieve long-term optimal recommendation performance over the whole recommendation process. In this paper, we propose a deep reinforcement learning based interactive service recommendation model (RLISR) to tackle this problem. Specifically, we formulate interactive service recommendation as a multi-round decision-making process and design a reinforcement learning framework to enable interactions between mashup developers and service recommender systems. First, we propose a knowledge-graph-based state representation modeling method that considers both the positive and negative feedback of developers. Then, we design an informative reward function aimed at boosting recommendation accuracy and reducing the number of recommendation rounds. Finally, we adopt a cascading Q-networks model to cope with the enormous combinatorial candidate space and learn an optimal recommendation policy. Extensive experiments conducted on a real-world dataset validate the effectiveness of the proposed approach compared with state-of-the-art service recommendation approaches.
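The abstract describes a reward function that balances recommendation accuracy against the number of interaction rounds in a multi-round episode. The paper's actual reward design is not given in this record, so the following is only an illustrative sketch: the `hit_bonus`, `round_penalty`, and `gamma` values and both function names are assumptions, showing one simple way such a trade-off and its discounted episode return could be expressed.

```python
# Illustrative sketch only (not the authors' implementation): a per-round
# reward that pays for accepted recommendations and charges for each extra
# round, plus the discounted return over a whole interaction episode.

def round_reward(hits, round_index, hit_bonus=1.0, round_penalty=0.2):
    """Reward for one recommendation round.

    hits: number of recommended services the developer accepted this round.
    round_index: 0-based index of the current round; later rounds are
        penalized to encourage satisfying the developer quickly.
    """
    return hit_bonus * hits - round_penalty * round_index

def episode_return(hits_per_round, gamma=0.9):
    """Discounted return over a multi-round episode.

    hits_per_round: list of accepted-recommendation counts, one per round.
    gamma: discount factor weighting earlier rounds more heavily.
    """
    return sum((gamma ** t) * round_reward(h, t)
               for t, h in enumerate(hits_per_round))
```

Under this sketch, an episode with two hits in round 0 and one hit in round 1 yields `2.0 + 0.9 * 0.8 = 2.72`, so a policy maximizing this return prefers accurate recommendations delivered in fewer rounds.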
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3420395