MRI: meaningful interpretations of collaborative ratings

Bibliographic Details
Published in: Proceedings of the VLDB Endowment, 2011-08, Vol. 4 (11), p. 1063-1074
Main Authors: Das, Mahashweta, Amer-Yahia, Sihem, Das, Gautam, Yu, Cong
Format: Article
Language: English
Summary: Collaborative rating sites have become essential resources that many users consult to make purchasing decisions on various items. Ideally, a user wants to quickly decide whether an item is desirable, especially when many choices are available. In practice, however, a user either spends a lot of time examining reviews before making an informed decision, or simply trusts the overall rating aggregation associated with an item. In this paper, we argue that neither option is satisfactory and propose a novel and powerful third option, Meaningful Ratings Interpretation (MRI), that automatically provides a meaningful interpretation of the ratings associated with the input items. As a simple example, given the movie "Usual Suspects," instead of simply showing the average rating of 8.7 from all reviewers, MRI produces a set of meaningful factoids such as "male reviewers under 30 from NYC love this movie." We define the notion of meaningful interpretation based on the idea of the data cube, and formalize two important sub-problems: meaningful description mining and meaningful difference mining. We show that these problems are NP-hard and design randomized hill exploration algorithms to solve them efficiently. We conduct user studies to show that MRI provides more helpful information to users than simple average ratings. Performance evaluation over real data shows that our algorithms run much faster than brute-force algorithms while generating interpretations of comparable quality.
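As a rough picture of the search idea mentioned in the abstract, the sketch below runs a randomized hill exploration over the cuboid lattice of reviewer-attribute descriptions. The toy ratings, the attribute domains, and the variance-based error measure with a minimum group size are all assumptions made for illustration; the paper's actual problem definitions and objective differ in detail.

```python
# Illustrative sketch only: randomized hill exploration over a data cube
# of reviewer attributes. The data and error measure are invented for
# demonstration and are NOT the paper's actual data or objective.
import random

# Toy ratings: (reviewer attributes, rating on a 1-10 scale).
RATINGS = [
    ({"gender": "M", "age": "under30", "city": "NYC"}, 9),
    ({"gender": "M", "age": "under30", "city": "NYC"}, 10),
    ({"gender": "F", "age": "under30", "city": "NYC"}, 8),
    ({"gender": "M", "age": "over30", "city": "LA"}, 7),
    ({"gender": "F", "age": "over30", "city": "LA"}, 6),
]
# Each attribute is either fixed to a value or generalized to "*" (any);
# together these choices form the cuboid lattice of candidate descriptions.
ATTRS = {"gender": ["M", "F", "*"],
         "age": ["under30", "over30", "*"],
         "city": ["NYC", "LA", "*"]}

def matches(desc, attrs):
    return all(v == "*" or attrs[a] == v for a, v in desc.items())

def group(desc):
    return [r for a, r in RATINGS if matches(desc, a)]

def error(desc):
    """Rating variance inside the described group: a stand-in for the
    paper's error measure. Tiny groups are rejected so the search does
    not collapse onto single reviewers."""
    grp = group(desc)
    if len(grp) < 2:
        return float("inf")
    mean = sum(grp) / len(grp)
    return sum((r - mean) ** 2 for r in grp) / len(grp)

def hill_explore(restarts=20, seed=0):
    """Restart from random cuboids; greedily move to the best neighbor
    (one attribute generalized or specialized) until none improves."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        desc = {a: rng.choice(vals) for a, vals in ATTRS.items()}
        while True:
            neighbors = [{**desc, a: v}
                         for a, vals in ATTRS.items()
                         for v in vals if v != desc[a]]
            cand = min(neighbors, key=error)
            if error(cand) >= error(desc):
                break
            desc = cand
        if best is None or error(desc) < error(best):
            best = desc
    return best

if __name__ == "__main__":
    d = hill_explore()
    grp = group(d)
    print("description:", d, "| avg rating:", sum(grp) / len(grp))
```

Each restart picks a random cuboid and then walks greedily through one-attribute changes; the repeated random restarts are what hedge against local optima, which is the intuition behind randomized hill exploration for this NP-hard search space.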
ISSN: 2150-8097
DOI: 10.14778/3402707.3402742