Beyond Positive History: Re-ranking with List-level Hybrid Feedback
As the last stage of recommender systems, re-ranking generates a re-ordered list that aligns with the user's preferences. However, previous works generally focus on item-level positive feedback as history (e.g., only clicked items) and ignore that users provide positive or negative feedback on items in the entire list. This list-level hybrid feedback can reveal users' holistic preferences and reflect the comparison behavior patterns users exhibit within a list. Such patterns could predict user behavior on candidate lists and thus aid better re-ranking. Despite these appealing benefits, extracting and integrating preferences and behavior patterns from list-level hybrid feedback into the re-ranking of multiple items remains challenging. To this end, the authors propose Re-ranking with List-level Hybrid Feedback (dubbed RELIFE). It captures users' preferences and behavior patterns with three modules: a Disentangled Interest Miner to disentangle a user's preferences into interests and disinterests, a Sequential Preference Mixer to learn users' entangled preferences while considering the context of feedback, and a Comparison-aware Pattern Extractor to capture users' behavior patterns within each list. Moreover, for better integration of patterns, contrastive learning is adopted to align the behavior patterns of candidate and historical lists. Extensive experiments show that RELIFE significantly outperforms state-of-the-art re-ranking baselines.
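The abstract describes the contrastive alignment between candidate-list and historical-list behavior patterns only at a high level. As a rough, hedged illustration (not the authors' implementation), the sketch below shows one plausible way such an alignment could be set up: a small attention-based encoder pools each list into a "pattern" vector, and an InfoNCE-style loss pulls a user's candidate-list pattern toward their own historical-list pattern. The module name `PatternEncoder`, the embedding sizes, and the specific loss formulation are all assumptions for illustration.

```python
# Illustrative sketch only -- NOT the RELIFE implementation. It shows one
# plausible way to align candidate-list and historical-list behavior-pattern
# representations with a contrastive (InfoNCE-style) objective, as described
# at a high level in the abstract. All names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatternEncoder(nn.Module):
    """Pools the item embeddings of one list into a single pattern vector."""

    def __init__(self, item_dim: int, pattern_dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(item_dim, num_heads=2, batch_first=True)
        self.proj = nn.Linear(item_dim, pattern_dim)

    def forward(self, list_items: torch.Tensor) -> torch.Tensor:
        # list_items: (batch, list_len, item_dim), one row per list.
        # Self-attention lets items "compare" with each other within the list.
        ctx, _ = self.attn(list_items, list_items, list_items)
        return self.proj(ctx.mean(dim=1))  # (batch, pattern_dim)


def list_alignment_loss(hist_pattern: torch.Tensor,
                        cand_pattern: torch.Tensor,
                        temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE: a user's historical-list pattern should be closest to the
    pattern of their own candidate list and far from other users' in the batch."""
    hist = F.normalize(hist_pattern, dim=-1)
    cand = F.normalize(cand_pattern, dim=-1)
    logits = hist @ cand.t() / temperature   # (batch, batch) similarity matrix
    labels = torch.arange(hist.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    batch, list_len, item_dim = 8, 10, 32
    encoder = PatternEncoder(item_dim, pattern_dim=16)
    hist_list = torch.randn(batch, list_len, item_dim)  # historical list with hybrid feedback
    cand_list = torch.randn(batch, list_len, item_dim)  # candidate list to be re-ranked
    loss = list_alignment_loss(encoder(hist_list), encoder(cand_list))
    print(f"contrastive alignment loss: {loss.item():.4f}")
```

In the paper itself, the aligned representations would presumably come from the Comparison-aware Pattern Extractor rather than a single attention-pooling encoder as sketched here.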
Published in: | arXiv.org 2024-10 |
---|---|
Main Authors: | Weng, Muyan; Xi, Yunjia; Liu, Weiwen; Chen, Bo; Lin, Jianghao; Tang, Ruiming; Zhang, Weinan; Yu, Yong |
Format: | Article |
Language: | English |
Subjects: | Negative feedback; Positive feedback; Preferences; Ranking; Recommender systems; User behavior |
Online Access: | Get full text |
identifier | EISSN: 2331-8422 |
issn | 2331-8422 |
source | ProQuest - Publicly Available Content Database |