
Boosting the Efficiency of First-Order Abductive Reasoning Using Pre-estimated Relatedness between Predicates

Bibliographic Details
Published in: International Journal of Machine Learning and Computing, 2015-04, Vol. 5 (2), pp. 114-120
Main Authors: Yamamoto, Kazeto, Inoue, Naoya, Inui, Kentaro, Arase, Yuki, Tsujii, Jun'ichi
Format: Article
Language: English
Description
Summary: Abduction is inference to the best explanation. While abduction has long been considered a promising framework for natural language processing (NLP), its computational complexity hinders its application to practical NLP problems. In this paper, we propose a method to predetermine the semantic relatedness between predicates and to use that information to boost the efficiency of first-order abductive reasoning. The proposed method uses the estimated semantic relatedness as follows: (i) to block inferences leading to explanations that are semantically irrelevant to the observations, and (ii) to cluster semantically relevant observations in order to split the task of abduction into a set of non-interdependent subproblems that can be solved in parallel. Our experiment with a large-scale knowledge base for a real-life NLP task reveals that the proposed method drastically reduces the size of the search space and significantly improves the computational efficiency of first-order abductive reasoning compared with the state-of-the-art system.
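The clustering step (ii) described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the relatedness table, threshold, and predicate names below are all made up for illustration; the point is only that a union-find pass over pairwise predicate relatedness partitions the observations into clusters that an abductive solver could then handle as independent subproblems.

```python
from itertools import combinations

# Hypothetical pre-estimated relatedness between predicate pairs
# (symmetric; unlisted pairs are treated as unrelated).
RELATEDNESS = {
    frozenset({"buy", "pay"}): 0.8,
    frozenset({"pay", "own"}): 0.6,
    frozenset({"rain", "wet"}): 0.9,
}
THRESHOLD = 0.5  # assumed cutoff below which predicates are unrelated

def relatedness(p, q):
    return RELATEDNESS.get(frozenset({p, q}), 0.0)

def cluster_observations(observations):
    """Union-find over observation predicates: two observations end up
    in the same cluster iff a chain of sufficiently related predicate
    pairs connects them."""
    parent = {o: o for o in observations}

    def find(x):
        # Path-halving find.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in combinations(observations, 2):
        if relatedness(a, b) >= THRESHOLD:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb  # union the two clusters

    clusters = {}
    for o in observations:
        clusters.setdefault(find(o), set()).add(o)
    return list(clusters.values())

clusters = cluster_observations(["buy", "pay", "own", "rain", "wet"])
# Each cluster is a non-interdependent subproblem that could be passed
# to the abductive solver and solved in parallel.
```

With the table above, the five observations split into two clusters ({buy, pay, own} and {rain, wet}), so the solver never searches for explanations linking, say, "buy" with "rain" — which is the search-space reduction the abstract describes.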
ISSN:2010-3700
DOI:10.7763/IJMLC.2015.V5.493