A meta learning approach for open information extraction

Bibliographic Details
Published in: Neural Computing & Applications, 2022-08, Vol. 34 (15), p. 12681-12694
Main Authors: Han, Jiabao; Wang, Hongzhi
Format: Article
Language: English
Summary: Open information extraction, one of the most important research topics in natural language processing, has achieved encouraging results in recent years. Yet despite the considerable effort devoted to it, existing systems still have many shortcomings and leave substantial room for improvement. Traditional open information extraction relies heavily on manually defined extraction paradigms, which cause errors to accumulate and propagate. End-to-end models, in turn, depend on large amounts of training data and are hard to re-train as the model grows. To cope with the difficulty of updating the parameters of large neural network models, this paper proposes a solution based on a meta-learning framework: we design a neural network-based converter module that effectively combines the previously learned model parameters with the new model parameters, and then updates the original open information extraction model with the parameters computed by the converter. This not only avoids the error-propagation problem of traditional models but also handles the iterative updating of open information extraction models. We evaluate our approach on a large public Open IE benchmark. The experimental results show that our model outperforms existing baselines and that, compared with full re-training, our strategy greatly shortens the model's update time without losing the performance of a model completely re-trained on all the training data.
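
The abstract sketches a converter network that merges previously learned parameters with newly learned ones and writes the result back into the Open IE model. The paper's exact design is not reproduced in this record, so the following is only a minimal PyTorch sketch under assumed choices: one illustrative MLP converter per parameter tensor, operating on flattened weight vectors in residual form; the names ParameterConverter and apply_converter are hypothetical, not the authors' code.

import torch
import torch.nn as nn

class ParameterConverter(nn.Module):
    # Hypothetical converter module: maps the old and new parameter vectors
    # of a single tensor to an updated vector. An illustrative MLP, not the
    # architecture from the paper.
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden),  # consumes [old; new] concatenated
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, old_params: torch.Tensor, new_params: torch.Tensor) -> torch.Tensor:
        # Residual form: start from the old parameters and add a learned
        # correction, so the converter only has to model the change.
        delta = self.net(torch.cat([old_params, new_params], dim=-1))
        return old_params + delta

@torch.no_grad()
def apply_converter(model: nn.Module, new_state: dict, converters: dict) -> None:
    # Overwrite each parameter of the extraction model with the converter's
    # output, in place of re-training the whole model from scratch.
    for name, param in model.named_parameters():
        old_flat = param.detach().flatten()
        new_flat = new_state[name].flatten()  # parameters from the update step
        merged = converters[name](old_flat, new_flat)
        param.copy_(merged.view_as(param))

In this reading, the converters themselves would be meta-trained over a sequence of update episodes so that apply_converter approximates what full re-training would produce; a practical implementation would also need a far more parameter-efficient converter than a dense MLP over flattened weights. How the converters are trained and whether they are shared across tensors are details the abstract does not specify.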
ISSN: 0941-0643
eISSN: 1433-3058
DOI: 10.1007/s00521-022-07114-7