
Towards Open-World Recommendation with Knowledge Augmentation from Large Language Models

Recommender systems play a vital role in various online services. However, the insulated nature of training and deploying them separately within a specific domain limits their access to open-world knowledge. Recently, the emergence of large language models (LLMs) has shown promise in bridging this gap by encoding extensive world knowledge and demonstrating reasoning capability. Nevertheless, previous attempts to directly use LLMs as recommenders have not achieved satisfactory results. In this work, we propose an Open-World Knowledge Augmented Recommendation Framework with Large Language Models, dubbed KAR, to acquire two types of external knowledge from LLMs -- the reasoning knowledge on user preferences and the factual knowledge on items. We introduce factorization prompting to elicit accurate reasoning on user preferences. The generated reasoning and factual knowledge are then transformed and condensed into augmented vectors by a hybrid-expert adaptor, making them compatible with the recommendation task. The resulting vectors can be used directly to enhance the performance of any recommendation model. We also ensure efficient inference by preprocessing and prestoring the knowledge from the LLM. Extensive experiments show that KAR significantly outperforms state-of-the-art baselines and is compatible with a wide range of recommendation algorithms. We deploy KAR on Huawei's news and music recommendation platforms and obtain improvements of 7% and 1.7%, respectively, in online A/B tests.
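
The abstract's central mechanism is the hybrid-expert adaptor, which condenses LLM-generated knowledge (reasoning knowledge about user preferences and factual knowledge about items) into compact augmented vectors that any recommendation model can consume. The sketch below illustrates one plausible reading of that idea as a small mixture-of-experts network in PyTorch; the dimensions, module names, and gating scheme are illustrative assumptions, not the authors' exact architecture.

    import torch
    import torch.nn as nn

    class HybridExpertAdaptor(nn.Module):
        """Compress an LLM knowledge embedding into a small augmented vector
        via a mixture of expert MLPs with a softmax gate (illustrative only)."""
        def __init__(self, llm_dim=1536, rec_dim=64, num_experts=4):
            super().__init__()
            # Each expert projects the LLM knowledge embedding into the
            # recommender's feature space.
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(llm_dim, 256), nn.ReLU(), nn.Linear(256, rec_dim))
                for _ in range(num_experts)
            ])
            # A gating network mixes the experts per input.
            self.gate = nn.Sequential(nn.Linear(llm_dim, num_experts), nn.Softmax(dim=-1))

        def forward(self, knowledge_emb):
            # knowledge_emb: (batch, llm_dim) encoding of a prestored LLM response,
            # e.g. reasoning about a user's preferences or facts about an item.
            expert_outs = torch.stack([e(knowledge_emb) for e in self.experts], dim=1)  # (B, E, rec_dim)
            weights = self.gate(knowledge_emb).unsqueeze(-1)                            # (B, E, 1)
            return (weights * expert_outs).sum(dim=1)                                   # (B, rec_dim)

    # Hypothetical usage: the LLM knowledge is generated and encoded offline,
    # prestored, and looked up at serving time; the adapted vector is then fed
    # to the downstream recommendation model as extra input features.
    adaptor = HybridExpertAdaptor()
    prestored_knowledge = torch.randn(8, 1536)        # placeholder batch of LLM knowledge encodings
    augmented_vectors = adaptor(prestored_knowledge)  # shape: (8, 64)

Keeping the adaptor separate from the backbone recommender is what allows the prestored LLM outputs to augment arbitrary models without invoking the LLM at inference time.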

Bibliographic Details
Published in: arXiv.org, 2023-12
Main Authors: Xi, Yunjia; Liu, Weiwen; Lin, Jianghao; Cai, Xiaoling; Zhu, Hong; Zhu, Jieming; Chen, Bo; Tang, Ruiming; Zhang, Weinan; Zhang, Rui; Yu, Yong
Format: Article
Language: English
Subjects: Algorithms; Knowledge; Large language models; Reasoning; Recommender systems
EISSN: 2331-8422
Online Access: Get full text