Multi-task Joint Learning to Enhance Named Entity Recognition

Bibliographic Details
Published in: Journal of Physics: Conference Series, 2023-02, Vol. 2428 (1), p. 012037
Main Authors: Shen, Xiajiong, Hu, Xiaojie, Liu, Ning, Shen, Yatian
Format: Article
Language: English
Description
Summary: Named Entity Recognition (NER) models have achieved good performance in recent years but still have shortcomings. Existing models treat NER as a single sequence labeling task for label prediction, without considering how the different stages of the entity recognition process affect the final result. NER can instead be viewed as two separate subtasks: a boundary detection task and a type prediction task. These subtasks can exchange information and cooperate during entity recognition, so their synergy benefits NER. In this paper, we propose a method that splits the NER task into multiple subtasks and uses the information from each subtask to enhance NER. According to the characteristics of each subtask, we use a different feature extraction model to effectively extract the structural information useful for that subtask. Using the features extracted by each subtask, or its final predictions, a gating network enhances the performance of NER. We conducted extensive experiments on the CoNLL-2003 dataset, and the results show that the proposed multi-task joint learning improves the effectiveness of the named entity recognition model.
ISSN: 1742-6588, 1742-6596
DOI: 10.1088/1742-6596/2428/1/012037
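
The summary above describes fusing the signals of a boundary-detection subtask and a type-prediction subtask through a gating network before producing the final NER labels. As a rough illustration only (the module name GatedSubtaskFusion, the dimensions, and the elementwise gate below are assumptions for this sketch, not the paper's published architecture), a minimal PyTorch-style version of such a gated fusion could look like this:

# Hypothetical sketch: fuse the per-token hidden states of a boundary-detection
# head and a type-prediction head through a learned gate, then classify NER
# labels from the fused representation.
import torch
import torch.nn as nn


class GatedSubtaskFusion(nn.Module):
    """Gate that mixes boundary and type features per token (illustrative)."""

    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # The gate looks at both subtask representations and outputs
        # elementwise weights in (0, 1).
        self.gate = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.Sigmoid())
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, boundary_feats: torch.Tensor, type_feats: torch.Tensor) -> torch.Tensor:
        # boundary_feats, type_feats: (batch, seq_len, hidden_dim)
        g = self.gate(torch.cat([boundary_feats, type_feats], dim=-1))
        fused = g * boundary_feats + (1.0 - g) * type_feats
        return self.classifier(fused)  # (batch, seq_len, num_labels)


# Toy usage with random tensors standing in for the two subtask encoders.
if __name__ == "__main__":
    batch, seq_len, hidden_dim, num_labels = 2, 10, 64, 9  # 9 = CoNLL-2003 BIO tags
    fusion = GatedSubtaskFusion(hidden_dim, num_labels)
    boundary = torch.randn(batch, seq_len, hidden_dim)
    types = torch.randn(batch, seq_len, hidden_dim)
    logits = fusion(boundary, types)
    print(logits.shape)  # torch.Size([2, 10, 9])

Here the gate decides, per token and per feature, how much weight to give the boundary representation versus the type representation before the final label classifier; the paper's actual feature extractors and fusion details may differ.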