Exploring Topic Difficulty in Information Retrieval Systems Evaluation

Bibliographic Details
Published in: Journal of Physics: Conference Series, 2019-12, Vol. 1339 (1), article 012019
Main Authors: Wei Ting Pang, Prabha Rajagopal, Mengjia Wang, Shuxiang Zhang, Sri Devi Ravana
Format: Article
Language: English
Description
The cost of relevance assessment and the reliability of an information retrieval (IR) evaluation are both highly correlated with the number of topics used. Producing a correspondingly large set of relevance judgments requires many assessors and incurs substantial cost and time, so using a large number of topics in a retrieval experiment is neither practical nor economical. This work proposes an approach for identifying the most effective topics for evaluating IR systems with regard to topic difficulty. The proposed approach identifies which topics, and what topic set size, are reliable for evaluating system effectiveness. Easy topics appeared to be the most suitable for effectively evaluating IR systems.
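
The abstract only names the idea; as a hedged illustration, the sketch below shows one common way such a topic-subset analysis is set up: per-topic system scores (e.g., average precision), topic "ease" taken as the mean score across systems, and Kendall's tau between the system ordering on a subset of the easiest topics and the ordering on the full topic set. The score matrix, the ease definition, and the subset sizes here are illustrative assumptions, not the paper's exact procedure.

import numpy as np
from scipy.stats import kendalltau

# Hypothetical effectiveness matrix: rows are systems, columns are topics.
# Cells stand in for per-topic scores such as average precision (AP);
# in practice these would come from an evaluation tool's per-topic output.
rng = np.random.default_rng(0)
scores = rng.beta(2, 5, size=(20, 50))  # 20 systems x 50 topics (toy data)

# Topic "ease": mean score across systems; a higher mean marks an easier topic.
ease = scores.mean(axis=0)

# Full-set system effectiveness: mean score over all 50 topics.
full_means = scores.mean(axis=1)

# Compare the ranking induced by the k easiest topics against the full set.
for k in (10, 20, 30):
    easiest = np.argsort(ease)[::-1][:k]         # indices of the k easiest topics
    sub_means = scores[:, easiest].mean(axis=1)  # effectiveness on the subset
    tau, _ = kendalltau(full_means, sub_means)   # rank agreement with full set
    print(f"k={k} easiest topics: Kendall tau vs. full topic set = {tau:.3f}")

A subset is "reliable" in this setup when its tau against the full-set ordering stays high; finding that easy-topic subsets reach high tau at small k would mirror the abstract's conclusion that easy topics evaluate systems most effectively.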
ISSN: 1742-6588 (print); 1742-6596 (online)
DOI: 10.1088/1742-6596/1339/1/012019