
Simulating and Modeling the Risk of Conversational Search

Bibliographic Details
Published in: ACM Transactions on Information Systems, 2022-10, Vol. 40 (4), p. 1-33
Main Authors: Wang, Zhenduo; Ai, Qingyao
Format: Article
Language: English
ISSN: 1046-8188
EISSN: 1558-2868
DOI: 10.1145/3507357
Online Access: https://doi.org/10.1145/3507357
Description: In conversational search, agents can interact with users by asking clarifying questions to increase their chance of finding better results. Many recent works and shared tasks in both the natural language processing and information retrieval communities have focused on identifying the need to ask clarifying questions and on methodologies for generating them. These works assume that asking a clarifying question is a safe alternative to retrieving results. As existing conversational search models are far from perfect, it is possible and common for them to retrieve or generate bad clarifying questions. Asking too many clarifying questions can also drain a user's patience when the user prefers search efficiency over correctness. Hence, these models can backfire and harm a user's search experience due to the risks of asking clarifying questions.

In this work, we propose a simulation framework to simulate the risk of asking questions in conversational search, and we further revise a risk-aware conversational search model to control that risk. We show the model's robustness and effectiveness through extensive experiments on three conversational datasets (MSDialog, the Ubuntu Dialog Corpus, and Opendialkg), in which we compare it with multiple baselines. We show that the risk-control module can work with two different re-ranker models and outperform all of the baselines in most of our experiments.
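The core decision the abstract describes (return a result now, or risk a clarifying question that may waste a turn) can be illustrated with a small sketch. The Python below is a hypothetical illustration only, not the authors' implementation: it assumes a re-ranker that scores both candidate answers and candidate clarifying questions, and a risk-control policy with an assumed linear patience penalty that raises the bar for asking as the conversation grows longer. All names and the penalty form are assumptions for illustration.

```python
# Hypothetical sketch of a risk-aware action policy for conversational search.
# Not the paper's method; Candidate, choose_action, and the linear penalty
# are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Candidate:
    text: str
    score: float  # relevance score from some re-ranker (assumed)


def choose_action(answers: list[Candidate],
                  questions: list[Candidate],
                  turns_taken: int,
                  risk_penalty: float = 0.1) -> tuple[str, Candidate]:
    """Return ("answer", best_answer) or ("ask", best_question).

    Asks a clarifying question only when its score beats the best answer's
    score by a margin that grows with each extra turn, modeling the risk
    of draining the user's patience.
    """
    best_answer = max(answers, key=lambda c: c.score)
    best_question = max(questions, key=lambda c: c.score)
    # Each additional turn raises the bar for asking yet another question.
    margin = risk_penalty * (turns_taken + 1)
    if best_question.score - best_answer.score > margin:
        return "ask", best_question
    return "answer", best_answer


# Example: on turn 0 a strong clarifying question wins; by turn 3 the
# same scores favor answering immediately.
answers = [Candidate("Try `sudo apt update` first.", 0.62)]
questions = [Candidate("Which Ubuntu version are you on?", 0.75)]
print(choose_action(answers, questions, turns_taken=0))  # ("ask", ...)
print(choose_action(answers, questions, turns_taken=3))  # ("answer", ...)
```

Under this sketch, the risk-control policy sits on top of whichever re-ranker produces the scores, which is consistent with the abstract's claim that the module can be paired with different re-ranker models.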