Reinforced Path Reasoning for Counterfactual Explainable Recommendation

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2024-07, Vol. 36 (7), pp. 3443-3459
Main Authors: Wang, Xiangmeng; Li, Qian; Yu, Dianer; Li, Qing; Xu, Guandong
Format: Article
Language: English
Description
Summary: Counterfactual explanations interpret the recommendation mechanism by exploring how minimal alterations to items or users affect recommendation decisions. Existing counterfactual explainable approaches face a huge search space, and their explanations are either action-based (e.g., user clicks) or aspect-based (i.e., item descriptions). We believe item attribute-based explanations are more intuitive and persuasive for users, since they explain recommendations through fine-grained item attributes such as brand. Moreover, counterfactual explanations can enhance recommendations by filtering out negative items. In this work, we propose a novel Counterfactual Explainable Recommendation (CERec) framework that generates item attribute-based counterfactual explanations while boosting recommendation performance. CERec optimizes an explanation policy by uniformly searching candidate counterfactuals within a reinforcement learning environment. We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph, and we deploy the explanation policy in a recommendation model to enhance the recommendation. Extensive explainability and recommendation evaluations demonstrate CERec's ability to provide explanations consistent with user preferences while maintaining improved recommendations.
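
To make the idea of attribute-based counterfactual search concrete, the sketch below shows a deliberately simplified, hypothetical version in Python: a toy knowledge graph of item attributes, a stand-in linear preference model, and a uniform sampler that looks for the smallest attribute change that drops an item out of the recommendation. All names here (toy_kg, score, find_counterfactual) and the scoring model are illustrative assumptions, not the paper's implementation; the actual CERec replaces the uniform candidate sampling with a learned adaptive path sampler and optimizes the explanation policy with reinforcement learning over the knowledge graph.

```python
# Minimal, assumed sketch of attribute-based counterfactual search.
# Not the CERec implementation: CERec learns the sampler and policy with RL.
import random

# Toy knowledge graph: item -> {attribute_type: attribute_value}
toy_kg = {
    "item_A": {"brand": "Acme", "category": "laptop", "price_band": "high"},
    "item_B": {"brand": "Zen",  "category": "laptop", "price_band": "mid"},
}

# Stand-in recommendation model: the user "likes" attribute values with weights.
user_profile = {"Acme": 0.9, "laptop": 0.4, "high": -0.2, "Zen": 0.1, "mid": 0.3}

def score(attrs):
    """Predicted preference for an item described by its attribute values."""
    return sum(user_profile.get(v, 0.0) for v in attrs.values())

def find_counterfactual(item, kg, threshold, max_changes=2, n_samples=200, seed=0):
    """Look for a minimal attribute change that pushes the item's score below
    `threshold`, i.e. removes it from the recommendation list. Candidate
    replacements are sampled uniformly; CERec would guide this search with a
    learned path sampler over the knowledge graph."""
    rng = random.Random(seed)
    attrs = kg[item]
    # Replacement values observed elsewhere in the graph, per attribute type.
    alternatives = {
        a: {other[a] for other in kg.values() if other.get(a) and other[a] != v}
        for a, v in attrs.items()
    }
    for k in range(1, max_changes + 1):            # prefer the smallest change
        for _ in range(n_samples):
            changed = rng.sample(sorted(attrs), k)
            candidate = dict(attrs)
            for a in changed:
                if alternatives[a]:
                    candidate[a] = rng.choice(sorted(alternatives[a]))
            if score(candidate) < threshold:
                # Keep only attributes that actually changed.
                return {a: (attrs[a], candidate[a])
                        for a in changed if candidate[a] != attrs[a]}
    return None

if __name__ == "__main__":
    base = score(toy_kg["item_A"])
    explanation = find_counterfactual("item_A", toy_kg, threshold=base - 0.5)
    print(explanation)  # e.g. {'brand': ('Acme', 'Zen')}
```

The returned mapping reads as a counterfactual statement, e.g. {'brand': ('Acme', 'Zen')} corresponds to "had the brand not been Acme, the item would not have been recommended," which is the attribute-level form of explanation the abstract describes.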
ISSN: 1041-4347, 1558-2191
DOI: 10.1109/TKDE.2024.3354077