
Deep Understanding Based Multi-Document Machine Reading Comprehension

Most existing multi-document machine reading comprehension models mainly focus on understanding the interactions between the input question and the documents, but ignore two further kinds of understanding: first, understanding the semantic meaning of words in the question and documents from the perspective of each other; and second, understanding the supporting cues for a correct answer from both intra-document and inter-document perspectives. Ignoring these would cause a model to overlook information that is helpful for finding correct answers. To overcome this deficiency, we propose a deep-understanding-based model for multi-document machine reading comprehension. It has three cascaded deep understanding modules designed to understand the accurate semantic meaning of words, the interactions between the input question and documents, and the supporting cues for the correct answer. We evaluate our model on two large-scale benchmark datasets, TriviaQA Web and DuReader. Extensive experiments show that the model achieves state-of-the-art results on both datasets.
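The abstract describes a cascade of three understanding modules (word semantics, question-document interaction, supporting-cue detection). Below is a minimal illustrative sketch of such a cascade in Python/PyTorch; all class names, the attention-based wiring, and the dimensions are assumptions made for illustration only and do not reproduce the authors' actual architecture or released code.

# Hypothetical sketch of a three-stage "deep understanding" cascade.
# Module names, layer choices, and dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class SemanticUnderstanding(nn.Module):
    """Refine word representations of question and documents from each other's perspective."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, question, documents):
        # Each side attends to the other to sharpen word-level semantics.
        q_ref, _ = self.cross(question, documents, documents)
        d_ref, _ = self.cross(documents, question, question)
        return self.norm(question + q_ref), self.norm(documents + d_ref)


class InteractionUnderstanding(nn.Module):
    """Model question-document interactions (question-aware document encoding)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, question, documents):
        fused, _ = self.attn(documents, question, question)
        return self.norm(documents + fused)


class SupportingCueUnderstanding(nn.Module):
    """Score each document token as a supporting cue for the answer."""
    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Linear(dim, 1)

    def forward(self, documents):
        return self.scorer(documents).squeeze(-1)  # one cue score per token


class DeepUnderstandingMRC(nn.Module):
    """Cascade: word semantics -> question/document interaction -> cue scoring."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.semantic = SemanticUnderstanding(dim)
        self.interaction = InteractionUnderstanding(dim)
        self.cues = SupportingCueUnderstanding(dim)

    def forward(self, question, documents):
        q, d = self.semantic(question, documents)
        d = self.interaction(q, d)
        return self.cues(d)


if __name__ == "__main__":
    q = torch.randn(2, 16, 128)   # (batch, question_len, dim)
    d = torch.randn(2, 200, 128)  # (batch, concatenated_docs_len, dim)
    print(DeepUnderstandingMRC()(q, d).shape)  # torch.Size([2, 200])

In this sketch the cue scores would feed a downstream answer extractor; the paper itself evaluates the full model on TriviaQA Web and DuReader.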

Bibliographic Details
Published in: ACM Transactions on Asian and Low-Resource Language Information Processing, 2022-04, Vol. 21 (5), pp. 1-21, Article 108
Main Authors: Ren, Feiliang; Liu, Yongkang; Li, Bochao; Wang, Zhibo; Guo, Yu; Liu, Shilei; Wu, Huimin; Wang, Jiaqi; Liu, Chunchao; Wang, Bingchao
Format: Article
Language: English
Subjects: Information systems; Question answering
Publisher: ACM, New York, NY
DOI: 10.1145/3519296
ISSN: 2375-4699
EISSN: 2375-4702