Simulating and Modeling the Risk of Conversational Search
Published in: ACM Transactions on Information Systems, 2022-10, Vol. 40 (4), p. 1-33
Main Authors:
Format: Article
Language: English
Summary: In conversational search, agents can interact with users by asking clarifying questions to increase their chance of finding better results. Many recent works and shared tasks in both the natural language processing and information retrieval communities have focused on identifying the need to ask clarifying questions and on methodologies for generating them. These works assume that asking a clarifying question is a safe alternative to retrieving results. Because existing conversational search models are far from perfect, however, it is possible and common for them to retrieve or generate bad clarifying questions. Asking too many clarifying questions can also drain a user's patience when the user prefers search efficiency over correctness. Hence, these models can backfire and harm the user's search experience because of the risks of asking clarifying questions.
In this work, we propose a simulation framework for the risk of asking questions in conversational search and further revise a risk-aware conversational search model to control that risk. We demonstrate the model's robustness and effectiveness through extensive experiments on three conversational datasets (MSDialog, the Ubuntu Dialog Corpus, and Opendialkg), in which we compare it with multiple baselines. We show that the risk-control module can work with two different re-ranker models and outperforms all of the baselines in most of our experiments.
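The risk-control idea described in the summary can be pictured as a per-turn "ask or answer" decision. The sketch below is a minimal illustration of that decision, assuming hypothetical components (`DialogState`, `score_answers`, `score_questions`, `decide`) and dummy scores; it is not the paper's actual architecture, which relies on trained re-rankers and a simulated user.

```python
# Hypothetical sketch of a per-turn "ask or answer" decision; names and scores
# are illustrative only and do not reproduce the paper's model.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class DialogState:
    query: str
    history: List[str] = field(default_factory=list)
    patience_left: int = 2  # rough budget of clarifying turns the user tolerates


def score_answers(state: DialogState) -> List[Tuple[str, float]]:
    # Stand-in for an answer re-ranker; a real system would score retrieved answers.
    return [("candidate answer", 0.55)]


def score_questions(state: DialogState) -> List[Tuple[str, float]]:
    # Stand-in for a clarifying-question re-ranker.
    return [("Did you mean the wired or the wireless setup?", 0.62)]


def decide(state: DialogState) -> str:
    """Return "ask" or "answer" for the current turn.

    Asking is chosen only when the best clarifying question looks more
    promising than answering immediately and the user still has patience
    for another turn; otherwise asking risks harming the experience.
    """
    if state.patience_left <= 0:
        return "answer"
    best_answer = max(score for _, score in score_answers(state))
    best_question = max(score for _, score in score_questions(state))
    return "ask" if best_question > best_answer else "answer"


if __name__ == "__main__":
    state = DialogState(query="my network is not working")
    print(decide(state))  # prints "ask" with the dummy scores above
```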
ISSN: 1046-8188, 1558-2868
DOI: 10.1145/3507357