
Research on Knowledge Graph Completion Model Combining Temporal Convolutional Network and Monte Carlo Tree Search

Bibliographic Details
Published in: Mathematical Problems in Engineering, 2022-03, Vol. 2022, p. 1-13
Main Authors: Wang, Ying; Sun, Mingchen; Wang, Hongji; Sun, Yudong
Format: Article
Language: English
Subjects: Algorithms; Decision making; Deep learning; History; Knowledge; Knowledge representation; Machine learning; Markov analysis; Monte Carlo simulation; Neural networks; Q values; Searching; Teaching methods
ISSN: 1024-123X
EISSN: 1563-5147
DOI: 10.1155/2022/2290540
Publisher: Hindawi (New York)

Description: In knowledge graph completion (KGC) and related applications, learning how to move from a source node to a target node for a given query is an important problem. It can be formulated as a reinforcement learning (RL) problem in which the agent learns which transition to take from a given state. To overcome the challenges of sparse rewards and of encoding the historical state, we develop a deep agent network (graph-agent, GA) that combines a temporal convolutional network (TCN) with Monte Carlo Tree Search (MCTS). First, we combine MCTS with a neural network to generate more positive-reward trajectories, which effectively alleviates the sparse-reward problem; the TCN encodes the state history, and the encoding serves both the policy and the Q-value estimates. Second, using these trajectories, we improve the network with Q-learning, with parameter sharing strengthening the TCN-based strategy. These steps are applied repeatedly to train the model. Third, in the prediction stage, MCTS combined with the learned Q-values is used to predict the target nodes. Experimental results on several graph-walking benchmarks show that GA outperforms other RL methods based on policy gradients; GA also outperforms traditional KGC baselines.
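
The description says a TCN encodes the agent's state history and that the encoding is shared between the policy and the Q-value estimates. Below is a minimal sketch of such a dilated causal-convolution encoder, assuming PyTorch; the class, dimension, and head names are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class HistoryEncoder(nn.Module):
    """Dilated causal 1-D convolutions over the agent's step embeddings,
    in the style of a temporal convolutional network (TCN)."""
    def __init__(self, emb_dim: int = 64, hidden: int = 64, levels: int = 3):
        super().__init__()
        layers = []
        for i in range(levels):
            dilation = 2 ** i
            layers += [
                # Left-pad so the convolution is causal: the output at
                # step t only depends on steps <= t.
                nn.ConstantPad1d((2 * dilation, 0), 0.0),
                nn.Conv1d(emb_dim if i == 0 else hidden, hidden,
                          kernel_size=3, dilation=dilation),
                nn.ReLU(),
            ]
        self.tcn = nn.Sequential(*layers)

    def forward(self, steps: torch.Tensor) -> torch.Tensor:
        # steps: (batch, seq_len, emb_dim); Conv1d expects (batch, channels, seq_len).
        h = self.tcn(steps.transpose(1, 2))
        # The last time step summarizes the history; the same encoding feeds
        # both heads, which is one way to realize the paper's parameter sharing.
        return h[:, :, -1]

encoder = HistoryEncoder()
state = encoder(torch.randn(4, 10, 64))   # 4 trajectories, 10 steps each
policy_head = nn.Linear(64, 200)          # scores over candidate edges (illustrative size)
q_head = nn.Linear(64, 200)               # Q-values over the same candidates
```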
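
Both trajectory generation during training and target prediction combine MCTS with the learned Q-values. The following is a minimal sketch of the selection and backup steps under that idea; the Node fields and the PUCT-style scoring rule are assumptions for illustration, not the paper's exact formulation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    prior: float            # policy-network probability of the incoming edge
    prior_q: float          # Q-network estimate before any simulation
    visits: int = 0
    q: float = 0.0          # running mean of backed-up returns
    children: list = field(default_factory=list)

def select_child(node: Node, c_puct: float = 1.5) -> Node:
    """PUCT-style selection: exploit the Q estimate, explore rarely
    visited edges in proportion to the policy prior."""
    total = sum(c.visits for c in node.children) + 1
    def score(c: Node) -> float:
        exploit = c.q if c.visits > 0 else c.prior_q
        explore = c_puct * c.prior * math.sqrt(total) / (1 + c.visits)
        return exploit + explore
    return max(node.children, key=score)

def backup(path: list, reward: float) -> None:
    """Propagate a terminal reward (e.g. 1 if the walk reached the target
    entity, else 0) back along the visited path."""
    for node in path:
        node.visits += 1
        node.q += (reward - node.q) / node.visits
```

Repeated simulations of this kind surface more positive-reward trajectories than plain policy rollouts, which is how the paper addresses reward sparsity.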
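
The collected trajectories then drive a Q-learning update of the network. A self-contained sketch of the standard one-step target, with placeholder values standing in for a stored transition:

```python
import torch

gamma = 0.95
# Hypothetical stored transition: Q-values over the current and next state's
# candidate actions, the chosen action index, and the observed reward.
q_now = torch.tensor([0.2, 0.5, 0.1])
q_next = torch.tensor([0.4, 0.3, 0.6])
action, reward = 1, 0.0

# Standard one-step Q-learning: bootstrap from the greedy next action.
td_target = reward + gamma * q_next.max()
loss = (q_now[action] - td_target) ** 2   # squared TD error, minimized by SGD
```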