
Self-supervised BGP-graph reasoning enhanced complex KBQA via SPARQL generation

Bibliographic Details
Published in: Information Processing & Management, 2024-09, Vol. 61 (5), p. 103802, Article 103802
Main Authors: Gao, Feng; Yang, Yan; Gao, Peng; Gu, Ming; Zhao, Shangqing; Chen, Yuefeng; Yuan, Hao; Lan, Man; Zhou, Aimin; He, Liang
Format: Article
Language: English
Description
Summary: Knowledge base question answering aims to answer complex questions over large-scale knowledge bases. Although existing generative language models that translate questions into SPARQL queries have achieved promising results, generation errors still arise from redundancies or errors in the knowledge fed to the generative models, and from the difficulty of representing the implicit logic of knowledge in the specific syntax of SPARQL. To address these issues, we propose TrackerQA, a novel self-supervised reasoning framework based on basic graph patterns (BGP) that determines precise paths and enhances SPARQL generation. First, we develop a contrastive-learning semantic matching model to reduce the large knowledge search space. Then, we build a BGP parser that parses the recalled knowledge and constraints into BGP graphs, deconstructing complex knowledge into BGP triples and naturally obtaining supervision from gold SPARQL. Next, we design a self-supervised BGP graph neural network that encodes knowledge through graph transformation layers with directed message-passing control and employs a question-aware attention mechanism to predict the exact BGP paths. Finally, a SPARQL generator integrates the paths into a pre-trained language model to improve SPARQL generation. Experiments on the KQA Pro dataset show that our model achieves a state-of-the-art answering accuracy of 95.32%, the closest to the human level of 97.5%, and reasons out KB paths with F1 scores of 0.98 for nodes and 0.99 for edges.
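The BGP parsing step in the abstract rests on the fact that a SPARQL WHERE clause is, at its core, a set of (subject, predicate, object) triple patterns. The following minimal sketch illustrates that decomposition; it is not the authors' implementation, the query and predicate names are invented, and a naive whitespace/period split is used in place of a real SPARQL grammar:

```python
import re

def parse_bgp(sparql: str) -> list[tuple[str, str, str]]:
    """Toy BGP extractor: pull the WHERE clause out of a SPARQL
    string and split it into (subject, predicate, object) triples.
    Real SPARQL (FILTER, OPTIONAL, prefixed names containing dots)
    requires a proper parser such as rdflib's."""
    where = re.search(r"WHERE\s*\{(.*)\}", sparql, re.S).group(1)
    triples = []
    for stmt in where.split("."):        # triple patterns end with " ."
        parts = stmt.split()
        if len(parts) == 3:              # keep only well-formed s-p-o patterns
            triples.append((parts[0], parts[1], parts[2]))
    return triples

# Hypothetical query in the movie domain used by KQA Pro-style questions.
query = """
SELECT ?name WHERE {
  ?film <pred:directed_by> ?person .
  ?person <pred:name> ?name .
}
"""
print(parse_bgp(query))
# → [('?film', '<pred:directed_by>', '?person'), ('?person', '<pred:name>', '?name')]
```

Each extracted triple is one node-edge-node step of a knowledge-base path, which is what makes gold SPARQL a natural source of supervision for path prediction: the triples themselves label which nodes and edges the correct reasoning path visits.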
ISSN: 0306-4573; 1873-5371
DOI: 10.1016/j.ipm.2024.103802