Predictive Prompts with Joint Training of Large Language Models for Explainable Recommendation

Bibliographic Details
Published in: Mathematics (Basel) 2023-10, Vol. 11 (20), p. 4230
Main Authors: Lin, Ching-Sheng, Tsai, Chung-Nan, Su, Shao-Tang, Jwo, Jung-Sing, Lee, Cheng-Hsiung, Wang, Xin
Format: Article
Language:English
Description
Summary: Large language models have recently gained popularity in various applications due to their ability to generate natural text for complex tasks. Recommendation systems, one of the most frequently studied research topics, can be further improved using the capabilities of large language models to track and understand user behaviors and preferences. In this research, we aim to build a reliable and transparent recommendation system by generating human-readable explanations that help users obtain better insights into the recommended items and gain more trust. We propose a learning scheme to jointly train the rating prediction task and the explanation generation task. The rating prediction task learns a predictive representation from the input user and item vectors. Subsequently, inspired by the recent success of prompt engineering, these predictive representations serve as predictive prompts, i.e., soft embeddings, to elicit and steer the knowledge embedded in language models for the explanation generation task. Empirical studies show that the proposed approach achieves competitive results compared with other existing baselines on the public English TripAdvisor dataset for explainable recommendation.
ISSN: 2227-7390
DOI: 10.3390/math11204230
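
The joint training scheme described in the summary can be sketched in a few lines. The following is a minimal toy illustration, not the authors' implementation: all dimensions, weight names (`W_pred`, `W_prompt`), the one-layer stand-in for the language model, and the loss weight `lam` are hypothetical. It only shows the shape of the idea: a predictive representation is learned from user/item vectors for rating prediction, then projected into soft-prompt embeddings prepended to the token sequence, and both losses are combined into one joint objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper does not specify these here).
d = 8           # embedding size
vocab = 20      # toy vocabulary size
prompt_len = 2  # number of soft-prompt vectors

# User/item vectors and parameters for the rating prediction task.
user_vec = rng.normal(size=d)
item_vec = rng.normal(size=d)
W_pred = rng.normal(size=(2 * d, d)) * 0.1  # maps [user; item] -> predictive repr.
w_rating = rng.normal(size=d) * 0.1         # linear rating head

def predictive_representation(u, i):
    """Shared representation learned by the rating prediction task."""
    return np.tanh(np.concatenate([u, i]) @ W_pred)

h = predictive_representation(user_vec, item_vec)
rating_pred = float(h @ w_rating)
rating_true = 4.0
rating_loss = (rating_pred - rating_true) ** 2  # MSE rating loss

# Serve h as a predictive prompt: project it to soft embeddings that are
# prepended to the token embeddings fed to the language model.
W_prompt = rng.normal(size=(d, prompt_len * d)) * 0.1
soft_prompt = (h @ W_prompt).reshape(prompt_len, d)

tok_emb = rng.normal(size=(vocab, d)) * 0.1  # toy LM token embedding table
target_ids = [3, 7, 5]                       # toy explanation token ids
inputs = np.vstack([soft_prompt, tok_emb[target_ids[:-1]]])

def lm_logits(prefix):
    """Toy stand-in for a language model: logits from mean-pooled context."""
    return tok_emb @ prefix.mean(axis=0)

# Teacher-forced cross-entropy for the explanation generation task.
gen_loss = 0.0
for t, tok in enumerate(target_ids):
    logits = lm_logits(inputs[: prompt_len + t])
    m = logits.max()
    log_probs = logits - m - np.log(np.exp(logits - m).sum())  # log-softmax
    gen_loss += -log_probs[tok]
gen_loss /= len(target_ids)

# Joint objective: both tasks are trained together with a trade-off weight.
lam = 0.5
joint_loss = rating_loss + lam * gen_loss
print(f"rating_loss={rating_loss:.4f} gen_loss={gen_loss:.4f} joint={joint_loss:.4f}")
```

In a real system the gradients of `joint_loss` would flow back through both heads into the shared representation, so the soft prompt carries rating-predictive information into the explanation generator.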