
Deep Understanding Based Multi-Document Machine Reading Comprehension


Bibliographic Details
Published in: ACM Transactions on Asian and Low-Resource Language Information Processing, 2022-04, Vol. 21(5), pp. 1-21, Article 108
Main Authors: Ren, Feiliang, Liu, Yongkang, Li, Bochao, Wang, Zhibo, Guo, Yu, Liu, Shilei, Wu, Huimin, Wang, Jiaqi, Liu, Chunchao, Wang, Bingchao
Format: Article
Language: English
Description
Summary: Most existing multi-document machine reading comprehension models focus mainly on understanding the interactions between the input question and documents, but ignore two other kinds of understanding. The first is understanding the semantic meaning of words in the input question and documents from the perspective of each other; the second is understanding the supporting cues for a correct answer from both intra-document and inter-document perspectives. Ignoring these can cause a model to overlook information that is helpful for finding correct answers. To overcome this deficiency, we propose a deep-understanding-based model for multi-document machine reading comprehension. It consists of three cascaded deep understanding modules, designed to understand the accurate semantic meaning of words, the interactions between the input question and documents, and the supporting cues for the correct answer. We evaluate our model on two large-scale benchmark datasets, TriviaQA Web and DuReader. Extensive experiments show that our model achieves state-of-the-art results on both.
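The three-stage cascade described in the summary can be sketched at a very high level. The module names, interfaces, and the lexical-overlap heuristics below are illustrative assumptions, not the authors' actual architecture, which the abstract describes only in outline:

```python
# Hypothetical sketch of a three-stage cascade for multi-document MRC.
# Each stage stands in for one "deep understanding" module from the
# abstract; the simple overlap heuristics are placeholders, not the
# paper's method.

def semantic_understanding(question, documents):
    """Stage 1 (assumed): relate word meaning in the question and
    documents to each other (here: shared-vocabulary tagging)."""
    q_words = set(question.lower().split())
    return [
        {"text": doc, "shared_words": q_words & set(doc.lower().split())}
        for doc in documents
    ]

def interaction_understanding(question, enriched_docs):
    """Stage 2 (assumed): score question-document interaction
    (here: lexical overlap as a stand-in for attention)."""
    q_len = max(len(question.split()), 1)
    return [
        {**doc, "score": len(doc["shared_words"]) / q_len}
        for doc in enriched_docs
    ]

def supporting_cue_understanding(scored_docs):
    """Stage 3 (assumed): aggregate intra-/inter-document cues
    (here: best sentence of the highest-scoring document)."""
    best_doc = max(scored_docs, key=lambda d: d["score"])
    sentences = best_doc["text"].split(". ")
    return max(
        sentences,
        key=lambda s: len(best_doc["shared_words"] & set(s.lower().split())),
    )

question = "Who wrote Hamlet"
documents = [
    "Hamlet is a tragedy. William Shakespeare wrote hamlet around 1600",
    "Macbeth is another tragedy by the same playwright",
]
answer_span = supporting_cue_understanding(
    interaction_understanding(question, semantic_understanding(question, documents))
)
print(answer_span)  # → "William Shakespeare wrote hamlet around 1600"
```

The key structural point the sketch illustrates is the cascading: each module's output becomes the next module's input, so later stages can build on the earlier, finer-grained understanding.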
ISSN: 2375-4699, 2375-4702
DOI: 10.1145/3519296