RTJTN: Relational Triplet Joint Tagging Network for Joint Entity and Relation Extraction
Published in: Computational Intelligence and Neuroscience, 2021, Vol. 2021(1), p. 3447473
Main Authors:
Format: Article
Language: English
Subjects:
Summary: Extracting entities and relations from unstructured sentences is one of the most widely studied tasks in natural language processing. However, most existing works process entity and relation information in a fixed order and suffer from error propagation. In this paper, we introduce a relational triplet joint tagging network (RTJTN), which consists of a joint entity and relation tagging layer and a relational triplet judgment layer. In the joint tagging layer, instead of extracting entities and relations separately, we propose a tagging method that lets the model extract entities and relations from unstructured sentences simultaneously, preventing error propagation; and, to address the relation-overlapping problem, we propose a relational triplet judgment network that judges the correct triplets among the group of triplets sharing the same relation in a sentence. In the experiments, we evaluate our network on the English public dataset NYT and the Chinese public datasets DuIE 2.0 and CMED. The F1 score of our model improves on the best baseline by 1.1, 6.0, and 5.1 points on the NYT, DuIE 2.0, and CMED datasets, respectively. An in-depth analysis of the model's performance on overlapping-relation problems and sentence-complexity problems shows that our model achieves gains of varying magnitude in all cases.
ISSN: 1687-5265, 1687-5273
DOI: 10.1155/2021/3447473
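
The record does not include the paper's code, but the F1 gains quoted in the summary are triplet-level scores, where a prediction counts only if the whole (subject, relation, object) triplet matches the gold annotation exactly. The sketch below shows this standard scoring in Python; the helper name triplet_f1 and the example sentence triplets are hypothetical and not taken from the paper.

```python
# A minimal, hypothetical sketch of triplet-level micro-F1 scoring, the usual
# metric behind the numbers reported in the summary above. The example
# entities and relation names are illustrative, not from the paper.

def triplet_f1(gold, pred):
    """Exact-match micro precision/recall/F1 over (subject, relation, object) triplets."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)  # triplets predicted exactly right
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# An overlapping-relation case: two gold triplets share the relation "born_in",
# the situation the relational triplet judgment layer is designed to resolve.
gold = [("John", "born_in", "New York"), ("Mary", "born_in", "New York")]
pred = [("John", "born_in", "New York"), ("Mary", "born_in", "Boston")]

p, r, f1 = triplet_f1(gold, pred)
print(f"P={p:.2f} R={r:.2f} F1={f1:.2f}")  # -> P=0.50 R=0.50 F1=0.50
```

Under this metric, a model that tags entities and relations jointly is only credited when both boundary detection and relation assignment are correct for the same triplet, which is why improvements of 1.1 to 6.0 F1 points over the best baseline are meaningful.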