
Pick the Better and Leave the Rest: Leveraging Multiple Retrieved Results to Guide Response Generation

Bibliographic Details
Published in: IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024, p.1-13
Main Authors: Wu, Bowen, Deng, Yunhan, Su, Donghang, Xiang, Jianyu, Yang, Chao, Wang, Zongsheng, Li, Ying, Huang, Junhong, Wang, Baoxun
Format: Article
Language: English
Description
Summary: Significant progress has been made on the Neural Response Generation (NRG) task; however, the diversity of generated results remains a great challenge. Compared to the End-to-End generation architecture, retrieval-based models can usually provide responses with better diversity, although the relevance of their results is harder to guarantee. Consequently, it is natural and reasonable to leverage the advantage of retrieval-based conversation systems to enhance NRG ones, so as to generate responses with satisfying diversity and controllable relevance. This paper proposes a deep neural framework that adopts and utilizes multiple retrieved responses to guide End-to-End generation. In particular, a mechanism is designed to explicitly pick the more important retrieved results as guidance. Meanwhile, if all the retrieved results fail to provide sufficient information, the framework automatically lets the model regress to a regular query-based NRG. According to thorough experimental comparisons with other retrieval-guided models, the proposed model better utilizes the useful information in retrieved results to generate appropriate and diverse responses.
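
The sketch below illustrates, in PyTorch, the general kind of mechanism the abstract describes: attention-style scores pick among encoded retrieved candidate responses, and a learned gate can suppress the retrieved guidance so the decoder conditions only on the query, i.e., fall back to regular query-based NRG. All names, shapes, and design choices here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a "pick the better retrieved result, or fall back" gate.
# Not the paper's actual model; dimensions and design details are assumptions.
import torch
import torch.nn as nn


class RetrievalGuidanceGate(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        # Scores the relevance of each retrieved candidate to the query.
        self.score = nn.Linear(2 * hidden_size, 1)
        # Gates how much of the retrieved guidance is kept at all.
        self.fallback = nn.Linear(2 * hidden_size, 1)

    def forward(self, query_vec: torch.Tensor, cand_vecs: torch.Tensor) -> torch.Tensor:
        """
        query_vec: (batch, hidden)     encoded dialogue query
        cand_vecs: (batch, K, hidden)  encoded retrieved candidate responses
        returns:   (batch, hidden)     guidance vector fed to the decoder
        """
        K = cand_vecs.size(1)
        q = query_vec.unsqueeze(1).expand(-1, K, -1)                      # (batch, K, hidden)
        # Explicitly weight the more important retrieved results ...
        logits = self.score(torch.cat([q, cand_vecs], dim=-1)).squeeze(-1)  # (batch, K)
        weights = torch.softmax(logits, dim=-1)
        guidance = torch.bmm(weights.unsqueeze(1), cand_vecs).squeeze(1)   # (batch, hidden)
        # ... and gate the whole guidance signal; a near-zero gate means the model
        # effectively regresses to query-only (regular query-based NRG) behaviour.
        g = torch.sigmoid(self.fallback(torch.cat([query_vec, guidance], dim=-1)))
        return g * guidance + (1.0 - g) * query_vec
```

The weighted sum realizes "pick the better and leave the rest" softly, while the scalar gate provides the automatic fallback when no candidate carries sufficient information; the actual paper may implement both steps differently.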
ISSN: 2329-9290
2329-9304
DOI: 10.1109/TASLP.2023.3302231