Parallel Fusion of Graph and Text with Semantic Enhancement for Commonsense Question Answering

Commonsense question answering (CSQA) is a challenging task in the field of knowledge graph question answering. It combines the context of the question with the relevant knowledge in the knowledge graph to reason and give an answer to the question. Existing CSQA models combine pretrained language models and graph neural networks to process question context and knowledge graph information, respectively, and obtain each other's information during the reasoning process to improve the accuracy of reasoning. However, the existing models do not fully utilize the textual representation and graph representation after reasoning to reason about the answer, and they do not give enough semantic representation to the edges during the reasoning process of the knowledge graph. Therefore, we propose a novel parallel fusion framework for text and knowledge graphs, using the fused global graph information to enhance the semantic information of reasoning answers. In addition, we enhance the relationship embedding by enriching the initial semantics and adjusting the initial weight distribution, thereby improving the reasoning ability of the graph neural network. We conducted experiments on two public datasets, CommonsenseQA and OpenBookQA, and found that our model is competitive when compared with other baseline models. Additionally, we validated the generalizability of our model on the MedQA-USMLE dataset.

Bibliographic Details
Published in: Electronics (Basel) 2024-12, Vol. 13 (23), p. 4618
Main Authors: Zong, Jiachuang; Li, Zhao; Chen, Tong; Zhang, Liguo; Zhan, Yiming
Format: Article
Language: English
Subjects: Analysis; Cognition & reasoning; Context; Datasets; Graph neural networks; Graph representations; Graph theory; Graphical representations; Knowledge representation; Language; Neural networks; Questions; Reasoning; Semantics
Online Access: Get full text
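The parallel fusion described in the abstract, pooling a text representation from the pretrained language model together with a graph representation from the GNN and scoring each answer candidate from the fused vector, can be illustrated with a minimal sketch. All dimensions, names, and the concatenation-plus-linear scorer here are hypothetical stand-ins, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_score(text_vec, graph_vec, w, b):
    """Concatenate the pooled text and graph representations and
    score one answer candidate with a single linear layer."""
    fused = np.concatenate([text_vec, graph_vec])  # parallel fusion by concatenation
    return float(fused @ w + b)

# Hypothetical dimensions: 4-d text embedding, 4-d graph embedding.
d_text, d_graph = 4, 4
w = rng.normal(size=d_text + d_graph)
b = 0.0

# Each (question, answer) candidate yields one text vector (in practice
# from a PLM) and one graph vector (from a GNN over a retrieved subgraph);
# random vectors stand in for both encoders here.
candidates = [(rng.normal(size=d_text), rng.normal(size=d_graph)) for _ in range(5)]
scores = [fuse_and_score(t, g, w, b) for t, g in candidates]
best = int(np.argmax(scores))  # predicted answer = highest-scoring candidate
```

In the paper's actual framework the two branches also exchange information during reasoning and the fused global graph information enriches the answer semantics; the sketch only shows the final fuse-then-score step.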
DOI: 10.3390/electronics13234618
ISSN: 2079-9292
Publisher: MDPI AG, Basel